Jan 31 05:16:57 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 05:16:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 05:16:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 05:16:57 localhost kernel: BIOS-provided physical RAM map:
Jan 31 05:16:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 05:16:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 05:16:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 05:16:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 05:16:57 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 05:16:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 05:16:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 05:16:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 05:16:57 localhost kernel: NX (Execute Disable) protection: active
Jan 31 05:16:57 localhost kernel: APIC: Static calls initialized
Jan 31 05:16:57 localhost kernel: SMBIOS 2.8 present.
Jan 31 05:16:57 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 05:16:57 localhost kernel: Hypervisor detected: KVM
Jan 31 05:16:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 05:16:57 localhost kernel: kvm-clock: using sched offset of 3946228460 cycles
Jan 31 05:16:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 05:16:57 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 31 05:16:57 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 31 05:16:57 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 31 05:16:57 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 05:16:57 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 05:16:57 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 05:16:57 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 05:16:57 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 05:16:57 localhost kernel: Using GB pages for direct mapping
Jan 31 05:16:57 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 05:16:57 localhost kernel: ACPI: Early table checksum verification disabled
Jan 31 05:16:57 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 05:16:57 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:16:57 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:16:57 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:16:57 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 05:16:57 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:16:57 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 05:16:57 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 05:16:57 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 05:16:57 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 05:16:57 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 05:16:57 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 05:16:57 localhost kernel: No NUMA configuration found
Jan 31 05:16:57 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 05:16:57 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 31 05:16:57 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 05:16:57 localhost kernel: Zone ranges:
Jan 31 05:16:57 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 05:16:57 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 05:16:57 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 05:16:57 localhost kernel:   Device   empty
Jan 31 05:16:57 localhost kernel: Movable zone start for each node
Jan 31 05:16:57 localhost kernel: Early memory node ranges
Jan 31 05:16:57 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 05:16:57 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 05:16:57 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 05:16:57 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 05:16:57 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 05:16:57 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 05:16:57 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 05:16:57 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 05:16:57 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 05:16:57 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 05:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 05:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 05:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 05:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 05:16:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 05:16:57 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 05:16:57 localhost kernel: TSC deadline timer available
Jan 31 05:16:57 localhost kernel: CPU topo: Max. logical packages:   8
Jan 31 05:16:57 localhost kernel: CPU topo: Max. logical dies:       8
Jan 31 05:16:57 localhost kernel: CPU topo: Max. dies per package:   1
Jan 31 05:16:57 localhost kernel: CPU topo: Max. threads per core:   1
Jan 31 05:16:57 localhost kernel: CPU topo: Num. cores per package:     1
Jan 31 05:16:57 localhost kernel: CPU topo: Num. threads per package:   1
Jan 31 05:16:57 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 05:16:57 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 05:16:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 05:16:57 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 05:16:57 localhost kernel: Booting paravirtualized kernel on KVM
Jan 31 05:16:57 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 05:16:57 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 05:16:57 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 05:16:57 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 31 05:16:57 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 31 05:16:57 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 05:16:57 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 05:16:57 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 05:16:57 localhost kernel: random: crng init done
Jan 31 05:16:57 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 05:16:57 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 05:16:57 localhost kernel: Fallback order for Node 0: 0 
Jan 31 05:16:57 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 05:16:57 localhost kernel: Policy zone: Normal
Jan 31 05:16:57 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 05:16:57 localhost kernel: software IO TLB: area num 8.
Jan 31 05:16:57 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 05:16:57 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 05:16:57 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 05:16:57 localhost kernel: Dynamic Preempt: voluntary
Jan 31 05:16:57 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 05:16:57 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 31 05:16:57 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 05:16:57 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 31 05:16:57 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 31 05:16:57 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 31 05:16:57 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 05:16:57 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 05:16:57 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 05:16:57 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 05:16:57 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 05:16:57 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 05:16:57 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 05:16:57 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 05:16:57 localhost kernel: Console: colour VGA+ 80x25
Jan 31 05:16:57 localhost kernel: printk: console [ttyS0] enabled
Jan 31 05:16:57 localhost kernel: ACPI: Core revision 20230331
Jan 31 05:16:57 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 05:16:57 localhost kernel: x2apic enabled
Jan 31 05:16:57 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 05:16:57 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 05:16:57 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 31 05:16:57 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 05:16:57 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 05:16:57 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 05:16:57 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 05:16:57 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 05:16:57 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 05:16:57 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 05:16:57 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 05:16:57 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 05:16:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 05:16:57 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 05:16:57 localhost kernel: active return thunk: retbleed_return_thunk
Jan 31 05:16:57 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 05:16:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 05:16:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 05:16:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 05:16:57 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 05:16:57 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 05:16:57 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 31 05:16:57 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 31 05:16:57 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 05:16:57 localhost kernel: landlock: Up and running.
Jan 31 05:16:57 localhost kernel: Yama: becoming mindful.
Jan 31 05:16:57 localhost kernel: SELinux:  Initializing.
Jan 31 05:16:57 localhost kernel: LSM support for eBPF active
Jan 31 05:16:57 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 05:16:57 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 05:16:57 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 05:16:57 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 05:16:57 localhost kernel: ... version:                0
Jan 31 05:16:57 localhost kernel: ... bit width:              48
Jan 31 05:16:57 localhost kernel: ... generic registers:      6
Jan 31 05:16:57 localhost kernel: ... value mask:             0000ffffffffffff
Jan 31 05:16:57 localhost kernel: ... max period:             00007fffffffffff
Jan 31 05:16:57 localhost kernel: ... fixed-purpose events:   0
Jan 31 05:16:57 localhost kernel: ... event mask:             000000000000003f
Jan 31 05:16:57 localhost kernel: signal: max sigframe size: 1776
Jan 31 05:16:57 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 31 05:16:57 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 31 05:16:57 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 31 05:16:57 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 31 05:16:57 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 05:16:57 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 05:16:57 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 31 05:16:57 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 31 05:16:57 localhost kernel: Memory: 7763668K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618412K reserved, 0K cma-reserved)
Jan 31 05:16:57 localhost kernel: devtmpfs: initialized
Jan 31 05:16:57 localhost kernel: x86/mm: Memory block size: 128MB
Jan 31 05:16:57 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 05:16:57 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 05:16:57 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 05:16:57 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 05:16:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 05:16:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 05:16:57 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 05:16:57 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 31 05:16:57 localhost kernel: audit: type=2000 audit(1769836616.322:1): state=initialized audit_enabled=0 res=1
Jan 31 05:16:57 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 05:16:57 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 05:16:57 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 05:16:57 localhost kernel: cpuidle: using governor menu
Jan 31 05:16:57 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 05:16:57 localhost kernel: PCI: Using configuration type 1 for base access
Jan 31 05:16:57 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 31 05:16:57 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 05:16:57 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 05:16:57 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 05:16:57 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 05:16:57 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 05:16:57 localhost kernel: Demotion targets for Node 0: null
Jan 31 05:16:57 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 05:16:57 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 31 05:16:57 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 31 05:16:57 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 05:16:57 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 05:16:57 localhost kernel: ACPI: Interpreter enabled
Jan 31 05:16:57 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 05:16:57 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 05:16:57 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 05:16:57 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 05:16:57 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 05:16:57 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 05:16:57 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [3] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [4] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [5] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [6] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [7] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [8] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [9] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [10] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [11] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [12] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [13] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [14] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [15] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [16] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [17] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [18] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [19] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [20] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [21] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [22] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [23] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [24] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [25] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [26] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [27] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [28] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [29] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [30] registered
Jan 31 05:16:57 localhost kernel: acpiphp: Slot [31] registered
Jan 31 05:16:57 localhost kernel: PCI host bridge to bus 0000:00
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 05:16:57 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 05:16:57 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 05:16:57 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 05:16:57 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 05:16:57 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 05:16:57 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 05:16:57 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 05:16:57 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 05:16:57 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 05:16:57 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 05:16:57 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 05:16:57 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 05:16:57 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 05:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 05:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 05:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 05:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 05:16:57 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 05:16:57 localhost kernel: iommu: Default domain type: Translated
Jan 31 05:16:57 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 05:16:57 localhost kernel: SCSI subsystem initialized
Jan 31 05:16:57 localhost kernel: ACPI: bus type USB registered
Jan 31 05:16:57 localhost kernel: usbcore: registered new interface driver usbfs
Jan 31 05:16:57 localhost kernel: usbcore: registered new interface driver hub
Jan 31 05:16:57 localhost kernel: usbcore: registered new device driver usb
Jan 31 05:16:57 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 05:16:57 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 05:16:57 localhost kernel: PTP clock support registered
Jan 31 05:16:57 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 31 05:16:57 localhost kernel: NetLabel: Initializing
Jan 31 05:16:57 localhost kernel: NetLabel:  domain hash size = 128
Jan 31 05:16:57 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 05:16:57 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 05:16:57 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 31 05:16:57 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 31 05:16:57 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 31 05:16:57 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 05:16:57 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 05:16:57 localhost kernel: vgaarb: loaded
Jan 31 05:16:57 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 05:16:57 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 05:16:57 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 05:16:57 localhost kernel: pnp: PnP ACPI init
Jan 31 05:16:57 localhost kernel: pnp 00:03: [dma 2]
Jan 31 05:16:57 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 31 05:16:57 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 05:16:57 localhost kernel: NET: Registered PF_INET protocol family
Jan 31 05:16:57 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 05:16:57 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 05:16:57 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 05:16:57 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 05:16:57 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 05:16:57 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 05:16:57 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 05:16:57 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 05:16:57 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 05:16:57 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 05:16:57 localhost kernel: NET: Registered PF_XDP protocol family
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 05:16:57 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 05:16:57 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 05:16:57 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 05:16:57 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 27203 usecs
Jan 31 05:16:57 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 31 05:16:57 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 05:16:57 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 05:16:57 localhost kernel: ACPI: bus type thunderbolt registered
Jan 31 05:16:57 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 31 05:16:57 localhost kernel: Initialise system trusted keyrings
Jan 31 05:16:57 localhost kernel: Key type blacklist registered
Jan 31 05:16:57 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 05:16:57 localhost kernel: zbud: loaded
Jan 31 05:16:57 localhost kernel: integrity: Platform Keyring initialized
Jan 31 05:16:57 localhost kernel: integrity: Machine keyring initialized
Jan 31 05:16:57 localhost kernel: Freeing initrd memory: 88000K
Jan 31 05:16:57 localhost kernel: NET: Registered PF_ALG protocol family
Jan 31 05:16:57 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 31 05:16:57 localhost kernel: Key type asymmetric registered
Jan 31 05:16:57 localhost kernel: Asymmetric key parser 'x509' registered
Jan 31 05:16:57 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 05:16:57 localhost kernel: io scheduler mq-deadline registered
Jan 31 05:16:57 localhost kernel: io scheduler kyber registered
Jan 31 05:16:57 localhost kernel: io scheduler bfq registered
Jan 31 05:16:57 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 05:16:57 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 05:16:57 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 05:16:57 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 31 05:16:57 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 05:16:57 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 05:16:57 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 05:16:57 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 05:16:57 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 05:16:57 localhost kernel: Non-volatile memory driver v1.3
Jan 31 05:16:57 localhost kernel: rdac: device handler registered
Jan 31 05:16:57 localhost kernel: hp_sw: device handler registered
Jan 31 05:16:57 localhost kernel: emc: device handler registered
Jan 31 05:16:57 localhost kernel: alua: device handler registered
Jan 31 05:16:57 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 05:16:57 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 05:16:57 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 05:16:57 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 05:16:57 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 05:16:57 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 05:16:57 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 31 05:16:57 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 05:16:57 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 05:16:57 localhost kernel: hub 1-0:1.0: USB hub found
Jan 31 05:16:57 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 31 05:16:57 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 05:16:57 localhost kernel: usbserial: USB Serial support registered for generic
Jan 31 05:16:57 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 05:16:57 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 05:16:57 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 05:16:57 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 05:16:57 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 05:16:57 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 05:16:57 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T05:16:56 UTC (1769836616)
Jan 31 05:16:57 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 05:16:57 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 05:16:57 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 05:16:57 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 05:16:57 localhost kernel: usbcore: registered new interface driver usbhid
Jan 31 05:16:57 localhost kernel: usbhid: USB HID core driver
Jan 31 05:16:57 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 31 05:16:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 05:16:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 05:16:57 localhost kernel: Initializing XFRM netlink socket
Jan 31 05:16:57 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 31 05:16:57 localhost kernel: Segment Routing with IPv6
Jan 31 05:16:57 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 31 05:16:57 localhost kernel: mpls_gso: MPLS GSO support
Jan 31 05:16:57 localhost kernel: IPI shorthand broadcast: enabled
Jan 31 05:16:57 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 05:16:57 localhost kernel: AES CTR mode by8 optimization enabled
Jan 31 05:16:57 localhost kernel: sched_clock: Marking stable (1047001360, 141399380)->(1313433780, -125033040)
Jan 31 05:16:57 localhost kernel: registered taskstats version 1
Jan 31 05:16:57 localhost kernel: Loading compiled-in X.509 certificates
Jan 31 05:16:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 05:16:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 05:16:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 05:16:57 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 05:16:57 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 05:16:57 localhost kernel: Demotion targets for Node 0: null
Jan 31 05:16:57 localhost kernel: page_owner is disabled
Jan 31 05:16:57 localhost kernel: Key type .fscrypt registered
Jan 31 05:16:57 localhost kernel: Key type fscrypt-provisioning registered
Jan 31 05:16:57 localhost kernel: Key type big_key registered
Jan 31 05:16:57 localhost kernel: Key type encrypted registered
Jan 31 05:16:57 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 05:16:57 localhost kernel: Loading compiled-in module X.509 certificates
Jan 31 05:16:57 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 05:16:57 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 31 05:16:57 localhost kernel: ima: No architecture policies found
Jan 31 05:16:57 localhost kernel: evm: Initialising EVM extended attributes:
Jan 31 05:16:57 localhost kernel: evm: security.selinux
Jan 31 05:16:57 localhost kernel: evm: security.SMACK64 (disabled)
Jan 31 05:16:57 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 05:16:57 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 05:16:57 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 05:16:57 localhost kernel: evm: security.apparmor (disabled)
Jan 31 05:16:57 localhost kernel: evm: security.ima
Jan 31 05:16:57 localhost kernel: evm: security.capability
Jan 31 05:16:57 localhost kernel: evm: HMAC attrs: 0x1
Jan 31 05:16:57 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 05:16:57 localhost kernel: Running certificate verification RSA selftest
Jan 31 05:16:57 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 05:16:57 localhost kernel: Running certificate verification ECDSA selftest
Jan 31 05:16:57 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 05:16:57 localhost kernel: clk: Disabling unused clocks
Jan 31 05:16:57 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 05:16:57 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 05:16:57 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 05:16:57 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 31 05:16:57 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 05:16:57 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 31 05:16:57 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 05:16:57 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 31 05:16:57 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 05:16:57 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 05:16:57 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 05:16:57 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 05:16:57 localhost kernel: Run /init as init process
Jan 31 05:16:57 localhost kernel:   with arguments:
Jan 31 05:16:57 localhost kernel:     /init
Jan 31 05:16:57 localhost kernel:   with environment:
Jan 31 05:16:57 localhost kernel:     HOME=/
Jan 31 05:16:57 localhost kernel:     TERM=linux
Jan 31 05:16:57 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Jan 31 05:16:57 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 05:16:57 localhost systemd[1]: Detected virtualization kvm.
Jan 31 05:16:57 localhost systemd[1]: Detected architecture x86-64.
Jan 31 05:16:57 localhost systemd[1]: Running in initrd.
Jan 31 05:16:57 localhost systemd[1]: No hostname configured, using default hostname.
Jan 31 05:16:57 localhost systemd[1]: Hostname set to <localhost>.
Jan 31 05:16:57 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 31 05:16:57 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 31 05:16:57 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 05:16:57 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 05:16:57 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 31 05:16:57 localhost systemd[1]: Reached target Local File Systems.
Jan 31 05:16:57 localhost systemd[1]: Reached target Path Units.
Jan 31 05:16:57 localhost systemd[1]: Reached target Slice Units.
Jan 31 05:16:57 localhost systemd[1]: Reached target Swaps.
Jan 31 05:16:57 localhost systemd[1]: Reached target Timer Units.
Jan 31 05:16:57 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 05:16:57 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 31 05:16:57 localhost systemd[1]: Listening on Journal Socket.
Jan 31 05:16:57 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 05:16:57 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 05:16:57 localhost systemd[1]: Reached target Socket Units.
Jan 31 05:16:57 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 05:16:57 localhost systemd[1]: Starting Journal Service...
Jan 31 05:16:57 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 05:16:57 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 05:16:57 localhost systemd[1]: Starting Create System Users...
Jan 31 05:16:57 localhost systemd[1]: Starting Setup Virtual Console...
Jan 31 05:16:57 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 05:16:57 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 05:16:57 localhost systemd[1]: Finished Create System Users.
Jan 31 05:16:57 localhost systemd-journald[311]: Journal started
Jan 31 05:16:57 localhost systemd-journald[311]: Runtime Journal (/run/log/journal/96867758a14c47b49648f2b42c325de8) is 8.0M, max 153.6M, 145.6M free.
Jan 31 05:16:57 localhost systemd-sysusers[316]: Creating group 'users' with GID 100.
Jan 31 05:16:57 localhost systemd-sysusers[316]: Creating group 'dbus' with GID 81.
Jan 31 05:16:57 localhost systemd-sysusers[316]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 05:16:57 localhost systemd[1]: Started Journal Service.
Jan 31 05:16:57 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 05:16:57 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 05:16:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 05:16:57 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 05:16:57 localhost systemd[1]: Finished Setup Virtual Console.
Jan 31 05:16:57 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 05:16:57 localhost systemd[1]: Starting dracut cmdline hook...
Jan 31 05:16:57 localhost dracut-cmdline[331]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 05:16:57 localhost dracut-cmdline[331]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 05:16:57 localhost systemd[1]: Finished dracut cmdline hook.
Jan 31 05:16:57 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 31 05:16:57 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 05:16:57 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 31 05:16:57 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 05:16:57 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 31 05:16:57 localhost kernel: RPC: Registered udp transport module.
Jan 31 05:16:57 localhost kernel: RPC: Registered tcp transport module.
Jan 31 05:16:57 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 05:16:57 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 05:16:57 localhost rpc.statd[449]: Version 2.5.4 starting
Jan 31 05:16:57 localhost rpc.statd[449]: Initializing NSM state
Jan 31 05:16:57 localhost rpc.idmapd[454]: Setting log level to 0
Jan 31 05:16:57 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 31 05:16:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 05:16:58 localhost systemd-udevd[467]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 05:16:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 05:16:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 31 05:16:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 31 05:16:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 05:16:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 31 05:16:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 05:16:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 05:16:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 05:16:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 05:16:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 05:16:58 localhost systemd[1]: Reached target Network.
Jan 31 05:16:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 05:16:58 localhost systemd[1]: Starting dracut initqueue hook...
Jan 31 05:16:58 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 05:16:58 localhost systemd-udevd[496]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:16:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 05:16:58 localhost kernel:  vda: vda1
Jan 31 05:16:58 localhost kernel: libata version 3.00 loaded.
Jan 31 05:16:58 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 31 05:16:58 localhost kernel: scsi host0: ata_piix
Jan 31 05:16:58 localhost kernel: scsi host1: ata_piix
Jan 31 05:16:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 05:16:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 05:16:58 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 31 05:16:58 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 31 05:16:58 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 05:16:58 localhost systemd[1]: Reached target Initrd Root Device.
Jan 31 05:16:58 localhost systemd[1]: Reached target System Initialization.
Jan 31 05:16:58 localhost systemd[1]: Reached target Basic System.
Jan 31 05:16:58 localhost kernel: ata1: found unknown device (class 0)
Jan 31 05:16:58 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 05:16:58 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 05:16:58 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 05:16:58 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 05:16:58 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 05:16:58 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 31 05:16:58 localhost systemd[1]: Finished dracut initqueue hook.
Jan 31 05:16:58 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 05:16:58 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 05:16:58 localhost systemd[1]: Reached target Remote File Systems.
Jan 31 05:16:58 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 31 05:16:58 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 31 05:16:58 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 05:16:58 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 05:16:58 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 05:16:58 localhost systemd[1]: Mounting /sysroot...
Jan 31 05:16:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 05:16:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 05:16:59 localhost kernel: XFS (vda1): Ending clean mount
Jan 31 05:16:59 localhost systemd[1]: Mounted /sysroot.
Jan 31 05:16:59 localhost systemd[1]: Reached target Initrd Root File System.
Jan 31 05:16:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 05:16:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 05:16:59 localhost systemd[1]: Reached target Initrd File Systems.
Jan 31 05:16:59 localhost systemd[1]: Reached target Initrd Default Target.
Jan 31 05:16:59 localhost systemd[1]: Starting dracut mount hook...
Jan 31 05:16:59 localhost systemd[1]: Finished dracut mount hook.
Jan 31 05:16:59 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 05:16:59 localhost rpc.idmapd[454]: exiting on signal 15
Jan 31 05:16:59 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 05:16:59 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 05:16:59 localhost systemd[1]: Stopped target Network.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Timer Units.
Jan 31 05:16:59 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 05:16:59 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Basic System.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Path Units.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Remote File Systems.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Slice Units.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Socket Units.
Jan 31 05:16:59 localhost systemd[1]: Stopped target System Initialization.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Local File Systems.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Swaps.
Jan 31 05:16:59 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped dracut mount hook.
Jan 31 05:16:59 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 31 05:16:59 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 05:16:59 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 05:16:59 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 31 05:16:59 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 31 05:16:59 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 05:16:59 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 05:16:59 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 05:16:59 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 05:16:59 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 31 05:16:59 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 05:16:59 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Closed udev Control Socket.
Jan 31 05:16:59 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Closed udev Kernel Socket.
Jan 31 05:16:59 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 31 05:16:59 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 31 05:16:59 localhost systemd[1]: Starting Cleanup udev Database...
Jan 31 05:16:59 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 05:16:59 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 05:16:59 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Stopped Create System Users.
Jan 31 05:16:59 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 05:16:59 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 05:16:59 localhost systemd[1]: Finished Cleanup udev Database.
Jan 31 05:16:59 localhost systemd[1]: Reached target Switch Root.
Jan 31 05:16:59 localhost systemd[1]: Starting Switch Root...
Jan 31 05:16:59 localhost systemd[1]: Switching root.
Jan 31 05:16:59 localhost systemd-journald[311]: Journal stopped
Jan 31 05:17:00 localhost systemd-journald[311]: Received SIGTERM from PID 1 (systemd).
Jan 31 05:17:00 localhost kernel: audit: type=1404 audit(1769836619.726:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 05:17:00 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:17:00 localhost kernel: SELinux:  policy capability open_perms=1
Jan 31 05:17:00 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:17:00 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:17:00 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:17:00 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:17:00 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:17:00 localhost kernel: audit: type=1403 audit(1769836619.848:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 05:17:00 localhost systemd[1]: Successfully loaded SELinux policy in 129.094ms.
Jan 31 05:17:00 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 40.906ms.
Jan 31 05:17:00 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 05:17:00 localhost systemd[1]: Detected virtualization kvm.
Jan 31 05:17:00 localhost systemd[1]: Detected architecture x86-64.
Jan 31 05:17:00 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:17:00 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 31 05:17:00 localhost systemd[1]: Stopped Switch Root.
Jan 31 05:17:00 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 05:17:00 localhost systemd[1]: Created slice Slice /system/getty.
Jan 31 05:17:00 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 31 05:17:00 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 31 05:17:00 localhost systemd[1]: Created slice User and Session Slice.
Jan 31 05:17:00 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 05:17:00 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 31 05:17:00 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 05:17:00 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 05:17:00 localhost systemd[1]: Stopped target Switch Root.
Jan 31 05:17:00 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 31 05:17:00 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 31 05:17:00 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 31 05:17:00 localhost systemd[1]: Reached target Path Units.
Jan 31 05:17:00 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 31 05:17:00 localhost systemd[1]: Reached target Slice Units.
Jan 31 05:17:00 localhost systemd[1]: Reached target Swaps.
Jan 31 05:17:00 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 31 05:17:00 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 31 05:17:00 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 31 05:17:00 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 31 05:17:00 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 31 05:17:00 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 05:17:00 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 05:17:00 localhost systemd[1]: Mounting Huge Pages File System...
Jan 31 05:17:00 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 31 05:17:00 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 31 05:17:00 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 31 05:17:00 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 05:17:00 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 05:17:00 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 05:17:00 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 31 05:17:00 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 31 05:17:00 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 31 05:17:00 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 05:17:00 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 31 05:17:00 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 31 05:17:00 localhost systemd[1]: Stopped Journal Service.
Jan 31 05:17:00 localhost systemd[1]: Starting Journal Service...
Jan 31 05:17:00 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 05:17:00 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 31 05:17:00 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 05:17:00 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 31 05:17:00 localhost systemd-journald[682]: Journal started
Jan 31 05:17:00 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 05:17:00 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 31 05:17:00 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 05:17:00 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 05:17:00 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 05:17:00 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 05:17:00 localhost kernel: fuse: init (API version 7.37)
Jan 31 05:17:00 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 05:17:00 localhost systemd[1]: Started Journal Service.
Jan 31 05:17:00 localhost systemd[1]: Mounted Huge Pages File System.
Jan 31 05:17:00 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 05:17:00 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 31 05:17:00 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 31 05:17:00 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 05:17:00 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 05:17:00 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 05:17:00 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 05:17:00 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 05:17:00 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 05:17:00 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 31 05:17:00 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 05:17:00 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 05:17:00 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 05:17:00 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 05:17:00 localhost systemd[1]: Mounting FUSE Control File System...
Jan 31 05:17:00 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 05:17:00 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 31 05:17:00 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 05:17:00 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 05:17:00 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 05:17:00 localhost systemd[1]: Starting Create System Users...
Jan 31 05:17:00 localhost systemd[1]: Mounted FUSE Control File System.
Jan 31 05:17:00 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 05:17:00 localhost systemd-journald[682]: Received client request to flush runtime journal.
Jan 31 05:17:00 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 05:17:00 localhost kernel: ACPI: bus type drm_connector registered
Jan 31 05:17:00 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 05:17:00 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 31 05:17:00 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 05:17:00 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 05:17:00 localhost systemd[1]: Finished Create System Users.
Jan 31 05:17:00 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 05:17:00 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 05:17:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 05:17:01 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 05:17:01 localhost systemd[1]: Reached target Local File Systems.
Jan 31 05:17:01 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 05:17:01 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 05:17:01 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 05:17:01 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 05:17:01 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 05:17:01 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 05:17:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 05:17:01 localhost bootctl[700]: Couldn't find EFI system partition, skipping.
Jan 31 05:17:01 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 05:17:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 05:17:01 localhost systemd[1]: Starting Security Auditing Service...
Jan 31 05:17:01 localhost systemd[1]: Starting RPC Bind...
Jan 31 05:17:01 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 05:17:01 localhost auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 05:17:01 localhost auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 05:17:01 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 05:17:01 localhost systemd[1]: Started RPC Bind.
Jan 31 05:17:01 localhost augenrules[711]: /sbin/augenrules: No change
Jan 31 05:17:01 localhost augenrules[726]: No rules
Jan 31 05:17:01 localhost augenrules[726]: enabled 1
Jan 31 05:17:01 localhost augenrules[726]: failure 1
Jan 31 05:17:01 localhost augenrules[726]: pid 706
Jan 31 05:17:01 localhost augenrules[726]: rate_limit 0
Jan 31 05:17:01 localhost augenrules[726]: backlog_limit 8192
Jan 31 05:17:01 localhost augenrules[726]: lost 0
Jan 31 05:17:01 localhost augenrules[726]: backlog 2
Jan 31 05:17:01 localhost augenrules[726]: backlog_wait_time 60000
Jan 31 05:17:01 localhost augenrules[726]: backlog_wait_time_actual 0
Jan 31 05:17:01 localhost augenrules[726]: enabled 1
Jan 31 05:17:01 localhost augenrules[726]: failure 1
Jan 31 05:17:01 localhost augenrules[726]: pid 706
Jan 31 05:17:01 localhost augenrules[726]: rate_limit 0
Jan 31 05:17:01 localhost augenrules[726]: backlog_limit 8192
Jan 31 05:17:01 localhost augenrules[726]: lost 0
Jan 31 05:17:01 localhost augenrules[726]: backlog 2
Jan 31 05:17:01 localhost augenrules[726]: backlog_wait_time 60000
Jan 31 05:17:01 localhost augenrules[726]: backlog_wait_time_actual 0
Jan 31 05:17:01 localhost augenrules[726]: enabled 1
Jan 31 05:17:01 localhost augenrules[726]: failure 1
Jan 31 05:17:01 localhost augenrules[726]: pid 706
Jan 31 05:17:01 localhost augenrules[726]: rate_limit 0
Jan 31 05:17:01 localhost augenrules[726]: backlog_limit 8192
Jan 31 05:17:01 localhost augenrules[726]: lost 0
Jan 31 05:17:01 localhost augenrules[726]: backlog 1
Jan 31 05:17:01 localhost augenrules[726]: backlog_wait_time 60000
Jan 31 05:17:01 localhost augenrules[726]: backlog_wait_time_actual 0
Jan 31 05:17:01 localhost systemd[1]: Started Security Auditing Service.
Jan 31 05:17:01 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 05:17:01 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 05:17:01 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 31 05:17:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 05:17:01 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 05:17:01 localhost systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 05:17:01 localhost systemd[1]: Starting Update is Completed...
Jan 31 05:17:01 localhost systemd[1]: Finished Update is Completed.
Jan 31 05:17:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 05:17:01 localhost systemd[1]: Reached target System Initialization.
Jan 31 05:17:01 localhost systemd[1]: Started dnf makecache --timer.
Jan 31 05:17:01 localhost systemd[1]: Started Daily rotation of log files.
Jan 31 05:17:01 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 05:17:01 localhost systemd[1]: Reached target Timer Units.
Jan 31 05:17:01 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 05:17:01 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 05:17:01 localhost systemd[1]: Reached target Socket Units.
Jan 31 05:17:01 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 31 05:17:01 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 05:17:01 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 05:17:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 05:17:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 05:17:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 05:17:01 localhost systemd-udevd[757]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:17:01 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 31 05:17:01 localhost systemd[1]: Reached target Basic System.
Jan 31 05:17:01 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 05:17:01 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 05:17:01 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 05:17:01 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 05:17:01 localhost dbus-broker-lau[772]: Ready
Jan 31 05:17:02 localhost systemd[1]: Starting NTP client/server...
Jan 31 05:17:02 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 05:17:02 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 05:17:02 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 05:17:02 localhost systemd[1]: Started irqbalance daemon.
Jan 31 05:17:02 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 05:17:02 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:17:02 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:17:02 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 05:17:02 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 31 05:17:02 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 05:17:02 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 31 05:17:02 localhost systemd[1]: Starting User Login Management...
Jan 31 05:17:02 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 05:17:02 localhost kernel: kvm_amd: TSC scaling supported
Jan 31 05:17:02 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 31 05:17:02 localhost kernel: kvm_amd: Nested Paging enabled
Jan 31 05:17:02 localhost kernel: kvm_amd: LBR virtualization supported
Jan 31 05:17:02 localhost chronyd[806]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 05:17:02 localhost chronyd[806]: Loaded 0 symmetric keys
Jan 31 05:17:02 localhost chronyd[806]: Using right/UTC timezone to obtain leap second data
Jan 31 05:17:02 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 05:17:02 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 05:17:02 localhost kernel: Console: switching to colour dummy device 80x25
Jan 31 05:17:02 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 05:17:02 localhost kernel: [drm] features: -context_init
Jan 31 05:17:02 localhost systemd[1]: Started NTP client/server.
Jan 31 05:17:02 localhost chronyd[806]: Loaded seccomp filter (level 2)
Jan 31 05:17:02 localhost kernel: [drm] number of scanouts: 1
Jan 31 05:17:02 localhost kernel: [drm] number of cap sets: 0
Jan 31 05:17:02 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 05:17:02 localhost systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 05:17:02 localhost systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 05:17:02 localhost systemd-logind[797]: New seat seat0.
Jan 31 05:17:02 localhost systemd[1]: Started User Login Management.
Jan 31 05:17:02 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 05:17:02 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 31 05:17:02 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 05:17:02 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 05:17:02 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 05:17:02 localhost iptables.init[792]: iptables: Applying firewall rules: [  OK  ]
Jan 31 05:17:02 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 05:17:02 localhost cloud-init[843]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 05:17:02 +0000. Up 7.26 seconds.
Jan 31 05:17:03 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 31 05:17:03 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 31 05:17:03 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpm6ia9ifk.mount: Deactivated successfully.
Jan 31 05:17:03 localhost systemd[1]: Starting Hostname Service...
Jan 31 05:17:03 localhost systemd[1]: Started Hostname Service.
Jan 31 05:17:03 np0005603492.novalocal systemd-hostnamed[857]: Hostname set to <np0005603492.novalocal> (static)
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Reached target Preparation for Network.
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Starting Network Manager...
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4436] NetworkManager (version 1.54.3-2.el9) is starting... (boot:d710c23e-4c03-4b1f-8d92-73ee5945a2a2)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4443] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4616] manager[0x55bad8535000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4662] hostname: hostname: using hostnamed
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4663] hostname: static hostname changed from (none) to "np0005603492.novalocal"
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4671] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4833] manager[0x55bad8535000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4834] manager[0x55bad8535000]: rfkill: WWAN hardware radio set enabled
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4926] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4926] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4927] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4927] manager: Networking is enabled by state file
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4929] settings: Loaded settings plugin: keyfile (internal)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4958] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4984] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.4997] dhcp: init: Using DHCP client 'internal'
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5002] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5015] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5026] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5037] device (lo): Activation: starting connection 'lo' (b9eb3add-3ec9-4938-9bcc-ef8bce8c9429)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5045] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5048] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5086] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5095] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Started Network Manager.
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5099] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5103] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5107] device (eth0): carrier: link connected
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5113] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Reached target Network.
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5124] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5134] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5141] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5143] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5146] manager: NetworkManager state is now CONNECTING
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5149] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5163] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5167] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5213] dhcp4 (eth0): state changed new lease, address=38.102.83.30
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5224] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5252] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5379] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5383] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5392] device (lo): Activation: successful, device activated.
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5419] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5422] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5427] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5432] device (eth0): Activation: successful, device activated.
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5439] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 05:17:03 np0005603492.novalocal NetworkManager[861]: <info>  [1769836623.5444] manager: startup complete
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Reached target NFS client services.
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Reached target Remote File Systems.
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 05:17:03 np0005603492.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 05:17:03 +0000. Up 8.30 seconds.
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |  eth0  | True |         38.102.83.30         | 255.255.255.0 | global | fa:16:3e:6a:e3:0d |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe6a:e30d/64 |       .       |  link  | fa:16:3e:6a:e3:0d |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 05:17:03 np0005603492.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 05:17:04 np0005603492.novalocal useradd[987]: new group: name=cloud-user, GID=1001
Jan 31 05:17:04 np0005603492.novalocal useradd[987]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 31 05:17:04 np0005603492.novalocal useradd[987]: add 'cloud-user' to group 'adm'
Jan 31 05:17:04 np0005603492.novalocal useradd[987]: add 'cloud-user' to group 'systemd-journal'
Jan 31 05:17:04 np0005603492.novalocal useradd[987]: add 'cloud-user' to shadow group 'adm'
Jan 31 05:17:04 np0005603492.novalocal useradd[987]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Generating public/private rsa key pair.
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: SHA256:MDkHz/HDj4FBptq259wCQ1hr95fW3LgqBlLfVT79SPU root@np0005603492.novalocal
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: +---[RSA 3072]----+
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |      ..=        |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |      .B *      o|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |     o*.= =    +o|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |    .o+=o  =  o.E|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |    .oooSo...* +o|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |     .+.. o = = o|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |      .+.. o   . |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |       +..o   .  |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |        oo....   |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: SHA256:tmX3U3RmDluKOVLRijtHdG67R7DfvSOVlE5wsQst8xk root@np0005603492.novalocal
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: +---[ECDSA 256]---+
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |            .. ..|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |            .o+..|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |           o.B+EB|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |          ..ooO&=|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |        S +o=.*B+|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |       . +oo.oo=.|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |        .  o  ++o|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |             ..o=|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |              .oo|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: The key fingerprint is:
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: SHA256:JNa7EA+jYaq6KoY0epG2HQ8E915LEm6cRjZZVj8NXF8 root@np0005603492.novalocal
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: The key's randomart image is:
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: +--[ED25519 256]--+
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |       oo...... E|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |  . . *o   ..o ..|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |   oo**+o   o . .|
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |   o.+OBo.   .   |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |  .o.+.+S.       |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: | ++ o ....       |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |=..+ +  .        |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |=.o . .          |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: |Bo               |
Jan 31 05:17:05 np0005603492.novalocal cloud-init[920]: +----[SHA256]-----+
Jan 31 05:17:05 np0005603492.novalocal sm-notify[1003]: Version 2.5.4 starting
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 05:17:05 np0005603492.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 31 05:17:05 np0005603492.novalocal sshd[1005]: Server listening on :: port 22.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Reached target Network is Online.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Starting System Logging Service...
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Starting Permit User Sessions...
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Finished Permit User Sessions.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Started Command Scheduler.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Started Getty on tty1.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 31 05:17:05 np0005603492.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Jan 31 05:17:05 np0005603492.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 31 05:17:05 np0005603492.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 41% if used.)
Jan 31 05:17:05 np0005603492.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Reached target Login Prompts.
Jan 31 05:17:05 np0005603492.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Jan 31 05:17:05 np0005603492.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Started System Logging Service.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Reached target Multi-User System.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 05:17:05 np0005603492.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:17:05 np0005603492.novalocal kdumpctl[1013]: kdump: No kdump initial ramdisk found.
Jan 31 05:17:05 np0005603492.novalocal kdumpctl[1013]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1107]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 05:17:05 +0000. Up 9.92 seconds.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 05:17:05 np0005603492.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1179]: Connection reset by 38.102.83.114 port 36364 [preauth]
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1201]: Unable to negotiate with 38.102.83.114 port 36370: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1208]: Connection reset by 38.102.83.114 port 36380 [preauth]
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1222]: Unable to negotiate with 38.102.83.114 port 36394: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1229]: Unable to negotiate with 38.102.83.114 port 36400: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1239]: Connection reset by 38.102.83.114 port 36414 [preauth]
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1253]: Connection reset by 38.102.83.114 port 36416 [preauth]
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1262]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 05:17:05 +0000. Up 10.26 seconds.
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1266]: Unable to negotiate with 38.102.83.114 port 36424: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 31 05:17:05 np0005603492.novalocal sshd-session[1276]: Unable to negotiate with 38.102.83.114 port 36436: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 31 05:17:05 np0005603492.novalocal dracut[1285]: dracut-057-102.git20250818.el9
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1302]: #############################################################
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1303]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1305]: 256 SHA256:tmX3U3RmDluKOVLRijtHdG67R7DfvSOVlE5wsQst8xk root@np0005603492.novalocal (ECDSA)
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1307]: 256 SHA256:JNa7EA+jYaq6KoY0epG2HQ8E915LEm6cRjZZVj8NXF8 root@np0005603492.novalocal (ED25519)
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1309]: 3072 SHA256:MDkHz/HDj4FBptq259wCQ1hr95fW3LgqBlLfVT79SPU root@np0005603492.novalocal (RSA)
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1310]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 05:17:05 np0005603492.novalocal cloud-init[1311]: #############################################################
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 05:17:06 np0005603492.novalocal cloud-init[1262]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 05:17:06 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.51 seconds
Jan 31 05:17:06 np0005603492.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 05:17:06 np0005603492.novalocal systemd[1]: Reached target Cloud-init target.
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 05:17:06 np0005603492.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: memstrack is not available
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: memstrack is not available
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 05:17:07 np0005603492.novalocal dracut[1287]: *** Including module: systemd ***
Jan 31 05:17:08 np0005603492.novalocal dracut[1287]: *** Including module: fips ***
Jan 31 05:17:08 np0005603492.novalocal dracut[1287]: *** Including module: systemd-initrd ***
Jan 31 05:17:08 np0005603492.novalocal dracut[1287]: *** Including module: i18n ***
Jan 31 05:17:08 np0005603492.novalocal dracut[1287]: *** Including module: drm ***
Jan 31 05:17:08 np0005603492.novalocal dracut[1287]: *** Including module: prefixdevname ***
Jan 31 05:17:08 np0005603492.novalocal dracut[1287]: *** Including module: kernel-modules ***
Jan 31 05:17:09 np0005603492.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 31 05:17:09 np0005603492.novalocal chronyd[806]: Selected source 209.227.173.244 (2.centos.pool.ntp.org)
Jan 31 05:17:09 np0005603492.novalocal chronyd[806]: System clock TAI offset set to 37 seconds
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]: *** Including module: kernel-modules-extra ***
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]: *** Including module: qemu ***
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]: *** Including module: fstab-sys ***
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]: *** Including module: rootfs-block ***
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]: *** Including module: terminfo ***
Jan 31 05:17:09 np0005603492.novalocal dracut[1287]: *** Including module: udev-rules ***
Jan 31 05:17:10 np0005603492.novalocal dracut[1287]: Skipping udev rule: 91-permissions.rules
Jan 31 05:17:10 np0005603492.novalocal dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 05:17:10 np0005603492.novalocal dracut[1287]: *** Including module: virtiofs ***
Jan 31 05:17:10 np0005603492.novalocal dracut[1287]: *** Including module: dracut-systemd ***
Jan 31 05:17:10 np0005603492.novalocal dracut[1287]: *** Including module: usrmount ***
Jan 31 05:17:10 np0005603492.novalocal dracut[1287]: *** Including module: base ***
Jan 31 05:17:10 np0005603492.novalocal dracut[1287]: *** Including module: fs-lib ***
Jan 31 05:17:10 np0005603492.novalocal dracut[1287]: *** Including module: kdumpbase ***
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:   microcode_ctl module: mangling fw_dir
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]: *** Including module: openssl ***
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]: *** Including module: shutdown ***
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]: *** Including module: squash ***
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]: *** Including modules done ***
Jan 31 05:17:11 np0005603492.novalocal dracut[1287]: *** Installing kernel module dependencies ***
Jan 31 05:17:12 np0005603492.novalocal dracut[1287]: *** Installing kernel module dependencies done ***
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: IRQ 35 affinity is now unmanaged
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: IRQ 33 affinity is now unmanaged
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: IRQ 31 affinity is now unmanaged
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: IRQ 28 affinity is now unmanaged
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: IRQ 34 affinity is now unmanaged
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: IRQ 32 affinity is now unmanaged
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: IRQ 30 affinity is now unmanaged
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 05:17:12 np0005603492.novalocal irqbalance[793]: IRQ 29 affinity is now unmanaged
Jan 31 05:17:12 np0005603492.novalocal dracut[1287]: *** Resolving executable dependencies ***
Jan 31 05:17:13 np0005603492.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 05:17:14 np0005603492.novalocal dracut[1287]: *** Resolving executable dependencies done ***
Jan 31 05:17:14 np0005603492.novalocal dracut[1287]: *** Generating early-microcode cpio image ***
Jan 31 05:17:14 np0005603492.novalocal dracut[1287]: *** Store current command line parameters ***
Jan 31 05:17:14 np0005603492.novalocal dracut[1287]: Stored kernel commandline:
Jan 31 05:17:14 np0005603492.novalocal dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Jan 31 05:17:14 np0005603492.novalocal dracut[1287]: *** Install squash loader ***
Jan 31 05:17:14 np0005603492.novalocal dracut[1287]: *** Squashing the files inside the initramfs ***
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: *** Squashing the files inside the initramfs done ***
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: *** Hardlinking files ***
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: Mode:           real
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: Files:          50
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: Linked:         0 files
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: Compared:       0 xattrs
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: Compared:       0 files
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: Saved:          0 B
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: Duration:       0.000656 seconds
Jan 31 05:17:15 np0005603492.novalocal dracut[1287]: *** Hardlinking files done ***
Jan 31 05:17:16 np0005603492.novalocal dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 05:17:16 np0005603492.novalocal kdumpctl[1013]: kdump: kexec: loaded kdump kernel
Jan 31 05:17:16 np0005603492.novalocal kdumpctl[1013]: kdump: Starting kdump: [OK]
Jan 31 05:17:16 np0005603492.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 31 05:17:16 np0005603492.novalocal systemd[1]: Startup finished in 1.472s (kernel) + 2.702s (initrd) + 16.973s (userspace) = 21.147s.
Jan 31 05:17:24 np0005603492.novalocal sshd-session[4301]: Accepted publickey for zuul from 38.102.83.114 port 53378 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 31 05:17:24 np0005603492.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 31 05:17:24 np0005603492.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 05:17:24 np0005603492.novalocal systemd-logind[797]: New session 1 of user zuul.
Jan 31 05:17:24 np0005603492.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 05:17:24 np0005603492.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Queued start job for default target Main User Target.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Created slice User Application Slice.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Reached target Paths.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Reached target Timers.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Starting D-Bus User Message Bus Socket...
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Starting Create User's Volatile Files and Directories...
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Finished Create User's Volatile Files and Directories.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Listening on D-Bus User Message Bus Socket.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Reached target Sockets.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Reached target Basic System.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Reached target Main User Target.
Jan 31 05:17:24 np0005603492.novalocal systemd[4305]: Startup finished in 122ms.
Jan 31 05:17:24 np0005603492.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 31 05:17:24 np0005603492.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 31 05:17:24 np0005603492.novalocal sshd-session[4301]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:17:25 np0005603492.novalocal python3[4387]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:17:27 np0005603492.novalocal python3[4415]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:17:33 np0005603492.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 05:17:34 np0005603492.novalocal python3[4475]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:17:35 np0005603492.novalocal python3[4515]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 05:17:37 np0005603492.novalocal python3[4541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwIvwGagiSyrUVBkJDcYXmLTZIUzemcH9irXyUTC+11DBF8v1XmrGEbtW0DXA092VtAsahyMGNsynSxzBgDHW9aSsutmZQtVJDPnWgeUPHUM/KdRtGNHhv9YhTPVmTFCjjTpWG68yvavDtyQn5woWMJf96wHOTMgDdbFTmX0xF31yXDyuwR0d8u18HOMA3SDiJUtd2D2w3uOOqox+yQGOZc1cpDWUgmR+xBGX4oUIaoiCnaMANl0qq8YMdGelMY9l9rIWKtLrelQdKhEEmMmK5F1mmuIEWOCO0rYL5cofHDd3aCzcBJx2JmKTjGu6HnI6p3hO0GJs7tf9RnmLawp2CmofMlgoPhcgxUIBOCG4z4rtad0Dg2emGE6I2g8L3gdi88sEOvSVH2KleXRyAcBHWq0laaXN1GmD0ljrv3CO1Kwn1R1Z6DK/p+/LDdWFVSTZTTzsCse1kT2zsUkCNELEzj6n7TTSTlT2x5jE/Ul083y1+ZaB5fsVduQtvVhuPU5c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:37 np0005603492.novalocal python3[4565]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:38 np0005603492.novalocal python3[4664]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:17:38 np0005603492.novalocal python3[4735]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769836658.0053632-207-89503624735907/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=22b1f2fefdc94ba08335795f81edd8c2_id_rsa follow=False checksum=78d3fbaa9a37c76a4584722c2f944481c0ce01d4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:39 np0005603492.novalocal python3[4858]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:17:39 np0005603492.novalocal python3[4929]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769836659.0272472-240-183872731187495/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=22b1f2fefdc94ba08335795f81edd8c2_id_rsa.pub follow=False checksum=608741a273f80075f3f7f0490ccf2d2de6ec057e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:41 np0005603492.novalocal python3[4977]: ansible-ping Invoked with data=pong
Jan 31 05:17:42 np0005603492.novalocal python3[5001]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:17:44 np0005603492.novalocal python3[5059]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 05:17:45 np0005603492.novalocal python3[5091]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:46 np0005603492.novalocal python3[5115]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:46 np0005603492.novalocal python3[5139]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:46 np0005603492.novalocal python3[5163]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:46 np0005603492.novalocal python3[5187]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:47 np0005603492.novalocal python3[5211]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:48 np0005603492.novalocal sudo[5235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhdcrpltheplhvxscxjlutbozhhdnjf ; /usr/bin/python3'
Jan 31 05:17:48 np0005603492.novalocal sudo[5235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:17:48 np0005603492.novalocal python3[5237]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:48 np0005603492.novalocal sudo[5235]: pam_unix(sudo:session): session closed for user root
Jan 31 05:17:49 np0005603492.novalocal sudo[5313]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qodrrcivriqjucbdnjqspbrmmkjkdpii ; /usr/bin/python3'
Jan 31 05:17:49 np0005603492.novalocal sudo[5313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:17:49 np0005603492.novalocal python3[5315]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:17:49 np0005603492.novalocal sudo[5313]: pam_unix(sudo:session): session closed for user root
Jan 31 05:17:49 np0005603492.novalocal sudo[5386]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sweuulmojcrbjqoxkxwdqhdjwjosltru ; /usr/bin/python3'
Jan 31 05:17:49 np0005603492.novalocal sudo[5386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:17:49 np0005603492.novalocal python3[5388]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769836668.9875844-21-11517367059437/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:17:49 np0005603492.novalocal sudo[5386]: pam_unix(sudo:session): session closed for user root
Jan 31 05:17:50 np0005603492.novalocal python3[5436]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:50 np0005603492.novalocal python3[5460]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:51 np0005603492.novalocal python3[5484]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:51 np0005603492.novalocal python3[5508]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:51 np0005603492.novalocal python3[5532]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:51 np0005603492.novalocal python3[5556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:52 np0005603492.novalocal python3[5580]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:52 np0005603492.novalocal python3[5604]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:52 np0005603492.novalocal python3[5628]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:52 np0005603492.novalocal python3[5652]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:53 np0005603492.novalocal python3[5676]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:53 np0005603492.novalocal python3[5700]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:53 np0005603492.novalocal python3[5724]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:53 np0005603492.novalocal python3[5748]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:54 np0005603492.novalocal python3[5772]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:54 np0005603492.novalocal python3[5796]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:54 np0005603492.novalocal python3[5820]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:54 np0005603492.novalocal python3[5844]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:55 np0005603492.novalocal python3[5868]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:55 np0005603492.novalocal python3[5892]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:55 np0005603492.novalocal python3[5916]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:55 np0005603492.novalocal python3[5940]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:56 np0005603492.novalocal python3[5964]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:56 np0005603492.novalocal python3[5988]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:56 np0005603492.novalocal python3[6012]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:57 np0005603492.novalocal python3[6036]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:17:59 np0005603492.novalocal sudo[6060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywyxtrzzqevymaqbbykrtxugsttkzelx ; /usr/bin/python3'
Jan 31 05:17:59 np0005603492.novalocal sudo[6060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:18:00 np0005603492.novalocal python3[6062]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 05:18:00 np0005603492.novalocal systemd[1]: Starting Time & Date Service...
Jan 31 05:18:00 np0005603492.novalocal systemd[1]: Started Time & Date Service.
Jan 31 05:18:00 np0005603492.novalocal systemd-timedated[6064]: Changed time zone to 'UTC' (UTC).
Jan 31 05:18:00 np0005603492.novalocal sudo[6060]: pam_unix(sudo:session): session closed for user root
Jan 31 05:18:00 np0005603492.novalocal sudo[6091]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxrunihcokcjqlalhhdfyiqgoggtbgz ; /usr/bin/python3'
Jan 31 05:18:00 np0005603492.novalocal sudo[6091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:18:00 np0005603492.novalocal python3[6093]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:18:00 np0005603492.novalocal sudo[6091]: pam_unix(sudo:session): session closed for user root
Jan 31 05:18:01 np0005603492.novalocal python3[6169]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:18:01 np0005603492.novalocal python3[6240]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769836681.121412-153-223824909725111/source _original_basename=tmpsz65n9nm follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:18:02 np0005603492.novalocal python3[6340]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:18:02 np0005603492.novalocal python3[6411]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769836682.059712-183-174258698480596/source _original_basename=tmpn0s7qdmg follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:18:03 np0005603492.novalocal sudo[6511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpgxlsrlynahbnbiwrsehxkimgaikenv ; /usr/bin/python3'
Jan 31 05:18:03 np0005603492.novalocal sudo[6511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:18:03 np0005603492.novalocal python3[6513]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:18:03 np0005603492.novalocal sudo[6511]: pam_unix(sudo:session): session closed for user root
Jan 31 05:18:03 np0005603492.novalocal sudo[6584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivujlttyahqgoszzuqcwvpyjwqzewjen ; /usr/bin/python3'
Jan 31 05:18:03 np0005603492.novalocal sudo[6584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:18:03 np0005603492.novalocal python3[6586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769836683.2023196-231-237233927189068/source _original_basename=tmpnovm541s follow=False checksum=08e43183db11613368a518f4478807de9b29fcbd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:18:03 np0005603492.novalocal sudo[6584]: pam_unix(sudo:session): session closed for user root
Jan 31 05:18:04 np0005603492.novalocal python3[6634]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:18:04 np0005603492.novalocal python3[6660]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:18:05 np0005603492.novalocal sudo[6738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egnxgjtsgadvxmxbzybjjsygpxbxrply ; /usr/bin/python3'
Jan 31 05:18:05 np0005603492.novalocal sudo[6738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:18:05 np0005603492.novalocal python3[6740]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:18:05 np0005603492.novalocal sudo[6738]: pam_unix(sudo:session): session closed for user root
Jan 31 05:18:05 np0005603492.novalocal sudo[6811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqpsegfjyutfnzfrfqiflabprvcwhivp ; /usr/bin/python3'
Jan 31 05:18:05 np0005603492.novalocal sudo[6811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:18:05 np0005603492.novalocal python3[6813]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769836685.0732365-273-61647626544556/source _original_basename=tmp2ewu7qql follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:18:05 np0005603492.novalocal sudo[6811]: pam_unix(sudo:session): session closed for user root
Jan 31 05:18:06 np0005603492.novalocal sudo[6862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpjfuwzmoqbjwnhmnzvedpqaawmotum ; /usr/bin/python3'
Jan 31 05:18:06 np0005603492.novalocal sudo[6862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:18:06 np0005603492.novalocal python3[6864]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-4c35-b7a4-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:18:06 np0005603492.novalocal sudo[6862]: pam_unix(sudo:session): session closed for user root
Jan 31 05:18:06 np0005603492.novalocal python3[6892]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-4c35-b7a4-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 05:18:08 np0005603492.novalocal python3[6920]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:18:16 np0005603492.novalocal chronyd[806]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Jan 31 05:18:24 np0005603492.novalocal sudo[6944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozsoaccwfsndsazzlgntjeesuudzhpa ; /usr/bin/python3'
Jan 31 05:18:24 np0005603492.novalocal sudo[6944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:18:24 np0005603492.novalocal python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:18:24 np0005603492.novalocal sudo[6944]: pam_unix(sudo:session): session closed for user root
Jan 31 05:18:30 np0005603492.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 05:18:57 np0005603492.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 05:18:57 np0005603492.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7499] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 05:18:57 np0005603492.novalocal systemd-udevd[6950]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7640] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7665] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7668] device (eth1): carrier: link connected
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7670] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7674] policy: auto-activating connection 'Wired connection 1' (62391001-882a-341e-86a2-0ed7a9c123d7)
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7678] device (eth1): Activation: starting connection 'Wired connection 1' (62391001-882a-341e-86a2-0ed7a9c123d7)
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7679] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7682] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7685] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:18:57 np0005603492.novalocal NetworkManager[861]: <info>  [1769836737.7689] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:18:58 np0005603492.novalocal python3[6976]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-2e3e-d606-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:19:08 np0005603492.novalocal sudo[7054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkatzapstwjnunvukyxpftjyqlsxgjvs ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 05:19:08 np0005603492.novalocal sudo[7054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:19:08 np0005603492.novalocal python3[7056]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:19:08 np0005603492.novalocal sudo[7054]: pam_unix(sudo:session): session closed for user root
Jan 31 05:19:08 np0005603492.novalocal sudo[7127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgulbuxnruslpfpblxuurphqyrpevyjl ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 05:19:08 np0005603492.novalocal sudo[7127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:19:09 np0005603492.novalocal python3[7129]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769836748.4706154-102-177637499049567/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=cda303af5cf9877125744d826b7aa00218dac00e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:19:09 np0005603492.novalocal sudo[7127]: pam_unix(sudo:session): session closed for user root
Jan 31 05:19:09 np0005603492.novalocal sudo[7177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrauywgglrmwjynuortnmkogmkwntzre ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 05:19:09 np0005603492.novalocal sudo[7177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:19:09 np0005603492.novalocal python3[7179]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Stopping Network Manager...
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[861]: <info>  [1769836749.7823] caught SIGTERM, shutting down normally.
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[861]: <info>  [1769836749.7829] dhcp4 (eth0): canceled DHCP transaction
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[861]: <info>  [1769836749.7829] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[861]: <info>  [1769836749.7830] dhcp4 (eth0): state changed no lease
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[861]: <info>  [1769836749.7833] manager: NetworkManager state is now CONNECTING
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[861]: <info>  [1769836749.8031] dhcp4 (eth1): canceled DHCP transaction
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[861]: <info>  [1769836749.8031] dhcp4 (eth1): state changed no lease
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[861]: <info>  [1769836749.8081] exiting (success)
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Stopped Network Manager.
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: NetworkManager.service: Consumed 1.015s CPU time, 10.0M memory peak.
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Starting Network Manager...
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.8658] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:d710c23e-4c03-4b1f-8d92-73ee5945a2a2)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.8660] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.8729] manager[0x5608179ad000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Starting Hostname Service...
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Started Hostname Service.
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9358] hostname: hostname: using hostnamed
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9359] hostname: static hostname changed from (none) to "np0005603492.novalocal"
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9366] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9372] manager[0x5608179ad000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9373] manager[0x5608179ad000]: rfkill: WWAN hardware radio set enabled
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9413] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9414] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9414] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9415] manager: Networking is enabled by state file
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9418] settings: Loaded settings plugin: keyfile (internal)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9424] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9473] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9486] dhcp: init: Using DHCP client 'internal'
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9491] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9498] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9508] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9522] device (lo): Activation: starting connection 'lo' (b9eb3add-3ec9-4938-9bcc-ef8bce8c9429)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9532] device (eth0): carrier: link connected
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9538] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9547] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9548] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9560] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9573] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9580] device (eth1): carrier: link connected
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9586] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9596] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (62391001-882a-341e-86a2-0ed7a9c123d7) (indicated)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9596] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9605] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9619] device (eth1): Activation: starting connection 'Wired connection 1' (62391001-882a-341e-86a2-0ed7a9c123d7)
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Started Network Manager.
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9630] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9654] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9657] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9659] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9661] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9664] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9666] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9667] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9669] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9674] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9676] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9682] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9684] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9703] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9704] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 05:19:09 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836749.9707] device (lo): Activation: successful, device activated.
Jan 31 05:19:09 np0005603492.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 05:19:09 np0005603492.novalocal sudo[7177]: pam_unix(sudo:session): session closed for user root
Jan 31 05:19:10 np0005603492.novalocal python3[7245]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-2e3e-d606-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:19:11 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836751.8630] dhcp4 (eth0): state changed new lease, address=38.102.83.30
Jan 31 05:19:11 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836751.8642] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 05:19:11 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836751.8731] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 05:19:11 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836751.8755] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 05:19:11 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836751.8758] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 05:19:11 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836751.8763] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 05:19:11 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836751.8768] device (eth0): Activation: successful, device activated.
Jan 31 05:19:11 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836751.8777] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 05:19:21 np0005603492.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 05:19:39 np0005603492.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5384] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 05:19:55 np0005603492.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 05:19:55 np0005603492.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5652] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5654] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5663] device (eth1): Activation: successful, device activated.
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5672] manager: startup complete
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5674] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <warn>  [1769836795.5682] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5692] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 05:19:55 np0005603492.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5821] dhcp4 (eth1): canceled DHCP transaction
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5822] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5822] dhcp4 (eth1): state changed no lease
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5840] policy: auto-activating connection 'ci-private-network' (06ac7496-aabc-5155-9521-89759a7ade20)
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5846] device (eth1): Activation: starting connection 'ci-private-network' (06ac7496-aabc-5155-9521-89759a7ade20)
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5847] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5852] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5862] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5875] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5926] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5929] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:19:55 np0005603492.novalocal NetworkManager[7189]: <info>  [1769836795.5936] device (eth1): Activation: successful, device activated.
Jan 31 05:20:05 np0005603492.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 05:20:07 np0005603492.novalocal systemd[4305]: Starting Mark boot as successful...
Jan 31 05:20:07 np0005603492.novalocal systemd[4305]: Finished Mark boot as successful.
Jan 31 05:20:09 np0005603492.novalocal sudo[7368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhwjcaafdnxrgxhoxqafwromeabsdzkm ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 05:20:09 np0005603492.novalocal sudo[7368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:20:09 np0005603492.novalocal python3[7370]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:20:09 np0005603492.novalocal sudo[7368]: pam_unix(sudo:session): session closed for user root
Jan 31 05:20:10 np0005603492.novalocal sudo[7441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxxxujmresbuvbjaaviefvoquqbybglv ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 05:20:10 np0005603492.novalocal sudo[7441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:20:10 np0005603492.novalocal python3[7443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769836809.577323-267-54826992576950/source _original_basename=tmpvr8mouk8 follow=False checksum=17fb47d1d3588c8c62c2cda85be1992cc3ecea03 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:20:10 np0005603492.novalocal sudo[7441]: pam_unix(sudo:session): session closed for user root
Jan 31 05:20:25 np0005603492.novalocal chronyd[806]: Selected source 209.227.173.244 (2.centos.pool.ntp.org)
Jan 31 05:21:10 np0005603492.novalocal sshd-session[4314]: Received disconnect from 38.102.83.114 port 53378:11: disconnected by user
Jan 31 05:21:10 np0005603492.novalocal sshd-session[4314]: Disconnected from user zuul 38.102.83.114 port 53378
Jan 31 05:21:10 np0005603492.novalocal sshd-session[4301]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:21:10 np0005603492.novalocal systemd-logind[797]: Session 1 logged out. Waiting for processes to exit.
Jan 31 05:23:07 np0005603492.novalocal systemd[4305]: Created slice User Background Tasks Slice.
Jan 31 05:23:07 np0005603492.novalocal systemd[4305]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 05:23:07 np0005603492.novalocal systemd[4305]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 05:27:53 np0005603492.novalocal sshd-session[7474]: Accepted publickey for zuul from 38.102.83.114 port 42128 ssh2: RSA SHA256:nLI9W8FlAkHSY0pJrzeKIqjEMoolvwyb6dlyVD5ZrF8
Jan 31 05:27:53 np0005603492.novalocal systemd-logind[797]: New session 3 of user zuul.
Jan 31 05:27:53 np0005603492.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 31 05:27:53 np0005603492.novalocal sshd-session[7474]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:27:53 np0005603492.novalocal sudo[7501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jabbnwcbvrpwasnenvrpqvxigbcgcoec ; /usr/bin/python3'
Jan 31 05:27:53 np0005603492.novalocal sudo[7501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:53 np0005603492.novalocal python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-7ee1-fb68-00000000216b-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:27:53 np0005603492.novalocal sudo[7501]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:54 np0005603492.novalocal sudo[7530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uibtndffjjnepfvmymmraxkytzkxfijq ; /usr/bin/python3'
Jan 31 05:27:54 np0005603492.novalocal sudo[7530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:54 np0005603492.novalocal python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:27:54 np0005603492.novalocal sudo[7530]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:54 np0005603492.novalocal sudo[7556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozrzakzezahvxutcainezzdagzivczyr ; /usr/bin/python3'
Jan 31 05:27:54 np0005603492.novalocal sudo[7556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:54 np0005603492.novalocal python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:27:54 np0005603492.novalocal sudo[7556]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:55 np0005603492.novalocal sudo[7582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnvwgqoqbrpxglkkvhqdkmuztlfrpmjb ; /usr/bin/python3'
Jan 31 05:27:55 np0005603492.novalocal sudo[7582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:55 np0005603492.novalocal python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:27:55 np0005603492.novalocal sudo[7582]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:55 np0005603492.novalocal sudo[7608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcnykyffaaveikmutniqeemkprljnsku ; /usr/bin/python3'
Jan 31 05:27:55 np0005603492.novalocal sudo[7608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:55 np0005603492.novalocal python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:27:55 np0005603492.novalocal sudo[7608]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:55 np0005603492.novalocal sudo[7634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfocgsrxdoctuyvpqqgsutlvmrptheeu ; /usr/bin/python3'
Jan 31 05:27:55 np0005603492.novalocal sudo[7634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:56 np0005603492.novalocal python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:27:56 np0005603492.novalocal sudo[7634]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:56 np0005603492.novalocal sudo[7712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slhhdjfrmatbvturzglaneovharxugdq ; /usr/bin/python3'
Jan 31 05:27:56 np0005603492.novalocal sudo[7712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:56 np0005603492.novalocal python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:27:56 np0005603492.novalocal sudo[7712]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:56 np0005603492.novalocal sudo[7785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idotwmhwdprokkdpkqqrkupugiufdwbp ; /usr/bin/python3'
Jan 31 05:27:56 np0005603492.novalocal sudo[7785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:56 np0005603492.novalocal python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769837276.2913408-497-75925591553591/source _original_basename=tmpzfpvtuqn follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:27:56 np0005603492.novalocal sudo[7785]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:57 np0005603492.novalocal sudo[7835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdwbjnqaoucyfcfwafgvdtlikpukttmw ; /usr/bin/python3'
Jan 31 05:27:57 np0005603492.novalocal sudo[7835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:57 np0005603492.novalocal python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 05:27:57 np0005603492.novalocal systemd[1]: Reloading.
Jan 31 05:27:58 np0005603492.novalocal systemd-rc-local-generator[7858]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:27:58 np0005603492.novalocal sudo[7835]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:59 np0005603492.novalocal sudo[7892]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzbgswnwvwvihaldysttbopbnaeucqcz ; /usr/bin/python3'
Jan 31 05:27:59 np0005603492.novalocal sudo[7892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:27:59 np0005603492.novalocal python3[7894]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 05:27:59 np0005603492.novalocal sudo[7892]: pam_unix(sudo:session): session closed for user root
Jan 31 05:27:59 np0005603492.novalocal sudo[7918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frltuuduubvrhxgjpsqmtcgddkqodqsf ; /usr/bin/python3'
Jan 31 05:27:59 np0005603492.novalocal sudo[7918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:28:00 np0005603492.novalocal python3[7920]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:28:00 np0005603492.novalocal sudo[7918]: pam_unix(sudo:session): session closed for user root
Jan 31 05:28:00 np0005603492.novalocal sudo[7946]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pibrloxznrbshjcbsjspzisxervngztm ; /usr/bin/python3'
Jan 31 05:28:00 np0005603492.novalocal sudo[7946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:28:00 np0005603492.novalocal python3[7948]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:28:00 np0005603492.novalocal sudo[7946]: pam_unix(sudo:session): session closed for user root
Jan 31 05:28:00 np0005603492.novalocal sudo[7974]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvidoicfbsxzgxutdiqtnljerfetwboi ; /usr/bin/python3'
Jan 31 05:28:00 np0005603492.novalocal sudo[7974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:28:00 np0005603492.novalocal python3[7976]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:28:00 np0005603492.novalocal sudo[7974]: pam_unix(sudo:session): session closed for user root
Jan 31 05:28:00 np0005603492.novalocal sudo[8002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggilxbdfyqzkabdxedwxfffsskruonog ; /usr/bin/python3'
Jan 31 05:28:00 np0005603492.novalocal sudo[8002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:28:00 np0005603492.novalocal python3[8004]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:28:00 np0005603492.novalocal sudo[8002]: pam_unix(sudo:session): session closed for user root
Jan 31 05:28:01 np0005603492.novalocal python3[8031]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-7ee1-fb68-000000002172-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:28:01 np0005603492.novalocal python3[8061]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:28:03 np0005603492.novalocal sshd-session[7477]: Connection closed by 38.102.83.114 port 42128
Jan 31 05:28:03 np0005603492.novalocal sshd-session[7474]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:28:03 np0005603492.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 05:28:03 np0005603492.novalocal systemd[1]: session-3.scope: Consumed 3.516s CPU time.
Jan 31 05:28:03 np0005603492.novalocal systemd-logind[797]: Session 3 logged out. Waiting for processes to exit.
Jan 31 05:28:03 np0005603492.novalocal systemd-logind[797]: Removed session 3.
Jan 31 05:28:05 np0005603492.novalocal sshd-session[8066]: Accepted publickey for zuul from 38.102.83.114 port 57052 ssh2: RSA SHA256:nLI9W8FlAkHSY0pJrzeKIqjEMoolvwyb6dlyVD5ZrF8
Jan 31 05:28:05 np0005603492.novalocal systemd-logind[797]: New session 4 of user zuul.
Jan 31 05:28:05 np0005603492.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 31 05:28:05 np0005603492.novalocal sshd-session[8066]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:28:05 np0005603492.novalocal sudo[8093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htrplhykbarsanyczpykafxjkhrlmdez ; /usr/bin/python3'
Jan 31 05:28:05 np0005603492.novalocal sudo[8093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:28:05 np0005603492.novalocal python3[8095]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 05:28:20 np0005603492.novalocal setsebool[8137]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 05:28:20 np0005603492.novalocal setsebool[8137]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 05:28:31 np0005603492.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 31 05:28:31 np0005603492.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:28:31 np0005603492.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 05:28:31 np0005603492.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:28:31 np0005603492.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:28:31 np0005603492.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:28:31 np0005603492.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:28:31 np0005603492.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:28:40 np0005603492.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 31 05:28:40 np0005603492.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:28:40 np0005603492.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 05:28:40 np0005603492.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:28:40 np0005603492.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:28:40 np0005603492.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:28:40 np0005603492.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:28:40 np0005603492.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:28:57 np0005603492.novalocal dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 05:28:58 np0005603492.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:28:58 np0005603492.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:28:58 np0005603492.novalocal systemd[1]: Reloading.
Jan 31 05:28:58 np0005603492.novalocal systemd-rc-local-generator[8900]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:28:58 np0005603492.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:28:59 np0005603492.novalocal sudo[8093]: pam_unix(sudo:session): session closed for user root
Jan 31 05:29:22 np0005603492.novalocal python3[21995]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-626e-128b-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:29:22 np0005603492.novalocal kernel: evm: overlay not supported
Jan 31 05:29:22 np0005603492.novalocal systemd[4305]: Starting D-Bus User Message Bus...
Jan 31 05:29:22 np0005603492.novalocal dbus-broker-launch[22533]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 05:29:22 np0005603492.novalocal dbus-broker-launch[22533]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 05:29:22 np0005603492.novalocal systemd[4305]: Started D-Bus User Message Bus.
Jan 31 05:29:22 np0005603492.novalocal dbus-broker-lau[22533]: Ready
Jan 31 05:29:22 np0005603492.novalocal systemd[4305]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 05:29:22 np0005603492.novalocal systemd[4305]: Created slice Slice /user.
Jan 31 05:29:22 np0005603492.novalocal systemd[4305]: podman-22445.scope: unit configures an IP firewall, but not running as root.
Jan 31 05:29:22 np0005603492.novalocal systemd[4305]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 05:29:22 np0005603492.novalocal systemd[4305]: Started podman-22445.scope.
Jan 31 05:29:23 np0005603492.novalocal systemd[4305]: Started podman-pause-9debed7f.scope.
Jan 31 05:29:24 np0005603492.novalocal sudo[23276]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myvzwlktqgzvhdaedwcvbkhcsnzceozk ; /usr/bin/python3'
Jan 31 05:29:24 np0005603492.novalocal sudo[23276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:29:24 np0005603492.novalocal python3[23298]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.74:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.74:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:29:24 np0005603492.novalocal python3[23298]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 05:29:24 np0005603492.novalocal sudo[23276]: pam_unix(sudo:session): session closed for user root
Jan 31 05:29:24 np0005603492.novalocal sshd-session[8069]: Connection closed by 38.102.83.114 port 57052
Jan 31 05:29:24 np0005603492.novalocal sshd-session[8066]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:29:24 np0005603492.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 05:29:24 np0005603492.novalocal systemd[1]: session-4.scope: Consumed 40.368s CPU time.
Jan 31 05:29:24 np0005603492.novalocal systemd-logind[797]: Session 4 logged out. Waiting for processes to exit.
Jan 31 05:29:24 np0005603492.novalocal systemd-logind[797]: Removed session 4.
Jan 31 05:29:36 np0005603492.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:29:36 np0005603492.novalocal systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:29:36 np0005603492.novalocal systemd[1]: man-db-cache-update.service: Consumed 44.404s CPU time.
Jan 31 05:29:36 np0005603492.novalocal systemd[1]: run-rc34aceb5935e4f9e87bbac9118e49ecd.service: Deactivated successfully.
Jan 31 05:29:47 np0005603492.novalocal sshd-session[29636]: Connection closed by 38.102.83.111 port 40910 [preauth]
Jan 31 05:29:47 np0005603492.novalocal sshd-session[29637]: Unable to negotiate with 38.102.83.111 port 40928: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 05:29:47 np0005603492.novalocal sshd-session[29638]: Unable to negotiate with 38.102.83.111 port 40932: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 05:29:47 np0005603492.novalocal sshd-session[29640]: Connection closed by 38.102.83.111 port 40900 [preauth]
Jan 31 05:29:47 np0005603492.novalocal sshd-session[29642]: Unable to negotiate with 38.102.83.111 port 40916: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 05:29:52 np0005603492.novalocal sshd-session[29646]: Accepted publickey for zuul from 38.102.83.114 port 46300 ssh2: RSA SHA256:nLI9W8FlAkHSY0pJrzeKIqjEMoolvwyb6dlyVD5ZrF8
Jan 31 05:29:52 np0005603492.novalocal systemd-logind[797]: New session 5 of user zuul.
Jan 31 05:29:52 np0005603492.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 31 05:29:52 np0005603492.novalocal sshd-session[29646]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:29:52 np0005603492.novalocal python3[29673]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEBC/ROvohgeoMLZAXvCcOq02FNXdwVAAXDB0MCruPmVrremWVYiT2rl0iOLoro8xgbQe4AbxQS7OwpNogBKhqk= zuul@np0005603491.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:29:52 np0005603492.novalocal sudo[29697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-przpmmikqhppfomkgrqirzjojllimcxc ; /usr/bin/python3'
Jan 31 05:29:52 np0005603492.novalocal sudo[29697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:29:52 np0005603492.novalocal python3[29699]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEBC/ROvohgeoMLZAXvCcOq02FNXdwVAAXDB0MCruPmVrremWVYiT2rl0iOLoro8xgbQe4AbxQS7OwpNogBKhqk= zuul@np0005603491.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:29:52 np0005603492.novalocal sudo[29697]: pam_unix(sudo:session): session closed for user root
Jan 31 05:29:53 np0005603492.novalocal sudo[29723]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvtnxoykggszzbeeneftrrovukuqkafh ; /usr/bin/python3'
Jan 31 05:29:53 np0005603492.novalocal sudo[29723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:29:53 np0005603492.novalocal python3[29725]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603492.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 05:29:53 np0005603492.novalocal useradd[29727]: new group: name=cloud-admin, GID=1002
Jan 31 05:29:53 np0005603492.novalocal useradd[29727]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 31 05:29:53 np0005603492.novalocal sudo[29723]: pam_unix(sudo:session): session closed for user root
Jan 31 05:29:54 np0005603492.novalocal sudo[29757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krlxcfstoiexcunffxrknxwpeevopfqf ; /usr/bin/python3'
Jan 31 05:29:54 np0005603492.novalocal sudo[29757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:29:54 np0005603492.novalocal python3[29759]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEBC/ROvohgeoMLZAXvCcOq02FNXdwVAAXDB0MCruPmVrremWVYiT2rl0iOLoro8xgbQe4AbxQS7OwpNogBKhqk= zuul@np0005603491.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 05:29:54 np0005603492.novalocal sudo[29757]: pam_unix(sudo:session): session closed for user root
Jan 31 05:29:54 np0005603492.novalocal sshd-session[29760]: banner exchange: Connection from 64.62.156.80 port 5082: invalid format
Jan 31 05:29:54 np0005603492.novalocal sudo[29836]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzxxjrpngpclemuvfjetttnltpzqhshw ; /usr/bin/python3'
Jan 31 05:29:54 np0005603492.novalocal sudo[29836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:29:54 np0005603492.novalocal python3[29838]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:29:54 np0005603492.novalocal sudo[29836]: pam_unix(sudo:session): session closed for user root
Jan 31 05:29:54 np0005603492.novalocal sudo[29909]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awogtpcgmhwjjrrljxizayalizjlplgw ; /usr/bin/python3'
Jan 31 05:29:54 np0005603492.novalocal sudo[29909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:29:54 np0005603492.novalocal python3[29911]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769837394.3342721-135-280397558929285/source _original_basename=tmp0tasd_xw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:29:55 np0005603492.novalocal sudo[29909]: pam_unix(sudo:session): session closed for user root
Jan 31 05:29:55 np0005603492.novalocal sudo[29959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzhbizwlepplvakdrbsvlabqwbfzckap ; /usr/bin/python3'
Jan 31 05:29:55 np0005603492.novalocal sudo[29959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:29:55 np0005603492.novalocal python3[29961]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 05:29:55 np0005603492.novalocal systemd[1]: Starting Hostname Service...
Jan 31 05:29:55 np0005603492.novalocal systemd[1]: Started Hostname Service.
Jan 31 05:29:55 np0005603492.novalocal systemd-hostnamed[29965]: Changed pretty hostname to 'compute-0'
Jan 31 05:29:55 compute-0 systemd-hostnamed[29965]: Hostname set to <compute-0> (static)
Jan 31 05:29:55 compute-0 NetworkManager[7189]: <info>  [1769837395.8475] hostname: static hostname changed from "np0005603492.novalocal" to "compute-0"
Jan 31 05:29:55 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 05:29:55 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 05:29:55 compute-0 sudo[29959]: pam_unix(sudo:session): session closed for user root
Jan 31 05:29:56 compute-0 sshd-session[29649]: Connection closed by 38.102.83.114 port 46300
Jan 31 05:29:56 compute-0 sshd-session[29646]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:29:56 compute-0 systemd-logind[797]: Session 5 logged out. Waiting for processes to exit.
Jan 31 05:29:56 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 05:29:56 compute-0 systemd[1]: session-5.scope: Consumed 1.960s CPU time.
Jan 31 05:29:56 compute-0 systemd-logind[797]: Removed session 5.
Jan 31 05:30:05 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 05:30:25 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 05:32:07 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 05:32:07 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 05:32:07 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 05:32:07 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 05:33:48 compute-0 sshd-session[29986]: Accepted publickey for zuul from 38.102.83.111 port 33200 ssh2: RSA SHA256:nLI9W8FlAkHSY0pJrzeKIqjEMoolvwyb6dlyVD5ZrF8
Jan 31 05:33:48 compute-0 systemd-logind[797]: New session 6 of user zuul.
Jan 31 05:33:48 compute-0 systemd[1]: Started Session 6 of User zuul.
Jan 31 05:33:48 compute-0 sshd-session[29986]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:33:49 compute-0 python3[30062]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:33:50 compute-0 sudo[30176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raxopbteiglcbopuqbrgbxlirylgkgit ; /usr/bin/python3'
Jan 31 05:33:50 compute-0 sudo[30176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:50 compute-0 python3[30178]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:33:50 compute-0 sudo[30176]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:51 compute-0 sudo[30249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-misvbybzsqgvmhbgfpepjfbcmrdepnmv ; /usr/bin/python3'
Jan 31 05:33:51 compute-0 sudo[30249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:51 compute-0 python3[30251]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769837630.4809806-33603-167972240282427/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:33:51 compute-0 sudo[30249]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:51 compute-0 sudo[30275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nskccxjgpvkkrmcdjnglpggdgygaurzh ; /usr/bin/python3'
Jan 31 05:33:51 compute-0 sudo[30275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:51 compute-0 python3[30277]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:33:51 compute-0 sudo[30275]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:51 compute-0 sudo[30348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slicsiabqsxvqyfixfnhcmzovtaolhca ; /usr/bin/python3'
Jan 31 05:33:51 compute-0 sudo[30348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:52 compute-0 python3[30350]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769837630.4809806-33603-167972240282427/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:33:52 compute-0 sudo[30348]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:52 compute-0 sudo[30374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhbpulwjmmyqozjtfmnqsrgadseyjxsj ; /usr/bin/python3'
Jan 31 05:33:52 compute-0 sudo[30374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:52 compute-0 python3[30376]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:33:52 compute-0 sudo[30374]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:52 compute-0 sudo[30447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kathdiiamyvjlcyzpgcybnvrdwzclsri ; /usr/bin/python3'
Jan 31 05:33:52 compute-0 sudo[30447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:52 compute-0 python3[30449]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769837630.4809806-33603-167972240282427/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:33:52 compute-0 sudo[30447]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:52 compute-0 sudo[30473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbhgprqtwyjjfutlufztgbuovodpwosj ; /usr/bin/python3'
Jan 31 05:33:52 compute-0 sudo[30473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:53 compute-0 python3[30475]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:33:53 compute-0 sudo[30473]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:53 compute-0 sudo[30546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnfvkdjluzenownhscjsrqxbjmzrkjno ; /usr/bin/python3'
Jan 31 05:33:53 compute-0 sudo[30546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:53 compute-0 python3[30548]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769837630.4809806-33603-167972240282427/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:33:53 compute-0 sudo[30546]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:53 compute-0 sudo[30572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thfuzryjrziipuindjyiotqvlggiykmj ; /usr/bin/python3'
Jan 31 05:33:53 compute-0 sudo[30572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:53 compute-0 python3[30574]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:33:53 compute-0 sudo[30572]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:54 compute-0 sudo[30645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dirhwunclcatwqdwujpcxeaesfhhtvdi ; /usr/bin/python3'
Jan 31 05:33:54 compute-0 sudo[30645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:54 compute-0 python3[30647]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769837630.4809806-33603-167972240282427/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:33:54 compute-0 sudo[30645]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:54 compute-0 sudo[30671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpslkdkaewrfvvboaqkusjfxygtawabg ; /usr/bin/python3'
Jan 31 05:33:54 compute-0 sudo[30671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:54 compute-0 python3[30673]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:33:54 compute-0 sudo[30671]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:54 compute-0 sudo[30744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szuxvlzatlbejpsejjfrbfeklsdaiyaz ; /usr/bin/python3'
Jan 31 05:33:54 compute-0 sudo[30744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:55 compute-0 python3[30746]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769837630.4809806-33603-167972240282427/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:33:55 compute-0 sudo[30744]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:55 compute-0 sudo[30770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjasqlayaesjcfpxtxjbnhnjtaruicnq ; /usr/bin/python3'
Jan 31 05:33:55 compute-0 sudo[30770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:55 compute-0 python3[30772]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:33:55 compute-0 sudo[30770]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:55 compute-0 sudo[30843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivlcnegruxgfncoesknwdkkfnplvbjww ; /usr/bin/python3'
Jan 31 05:33:55 compute-0 sudo[30843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:33:55 compute-0 python3[30845]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769837630.4809806-33603-167972240282427/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:33:55 compute-0 sudo[30843]: pam_unix(sudo:session): session closed for user root
Jan 31 05:33:57 compute-0 sshd-session[30870]: Unable to negotiate with 192.168.122.11 port 57378: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 05:33:57 compute-0 sshd-session[30872]: Connection closed by 192.168.122.11 port 57364 [preauth]
Jan 31 05:33:57 compute-0 sshd-session[30871]: Unable to negotiate with 192.168.122.11 port 57384: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 05:33:57 compute-0 sshd-session[30873]: Unable to negotiate with 192.168.122.11 port 57390: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 05:33:57 compute-0 sshd-session[30874]: Connection closed by 192.168.122.11 port 57370 [preauth]
Jan 31 05:34:06 compute-0 python3[30903]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:35:16 compute-0 sshd-session[30907]: Invalid user  from 14.103.183.21 port 48242
Jan 31 05:35:23 compute-0 sshd-session[30907]: Connection closed by invalid user  14.103.183.21 port 48242 [preauth]
Jan 31 05:39:05 compute-0 sshd-session[29989]: Received disconnect from 38.102.83.111 port 33200:11: disconnected by user
Jan 31 05:39:05 compute-0 sshd-session[29989]: Disconnected from user zuul 38.102.83.111 port 33200
Jan 31 05:39:05 compute-0 sshd-session[29986]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:39:05 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 05:39:05 compute-0 systemd[1]: session-6.scope: Consumed 4.766s CPU time.
Jan 31 05:39:05 compute-0 systemd-logind[797]: Session 6 logged out. Waiting for processes to exit.
Jan 31 05:39:05 compute-0 systemd-logind[797]: Removed session 6.
Jan 31 05:45:27 compute-0 sshd-session[30912]: Accepted publickey for zuul from 192.168.122.30 port 42478 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:45:27 compute-0 systemd-logind[797]: New session 7 of user zuul.
Jan 31 05:45:27 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 31 05:45:27 compute-0 sshd-session[30912]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:45:28 compute-0 python3.9[31065]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:45:29 compute-0 sudo[31244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icyvruncfvfhynppredrpfncsrmcromu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838329.1971138-27-229145953545777/AnsiballZ_command.py'
Jan 31 05:45:29 compute-0 sudo[31244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:45:29 compute-0 python3.9[31246]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:45:36 compute-0 sudo[31244]: pam_unix(sudo:session): session closed for user root
Jan 31 05:45:36 compute-0 sshd-session[30915]: Connection closed by 192.168.122.30 port 42478
Jan 31 05:45:36 compute-0 sshd-session[30912]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:45:36 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 05:45:36 compute-0 systemd[1]: session-7.scope: Consumed 7.180s CPU time.
Jan 31 05:45:36 compute-0 systemd-logind[797]: Session 7 logged out. Waiting for processes to exit.
Jan 31 05:45:36 compute-0 systemd-logind[797]: Removed session 7.
Jan 31 05:45:52 compute-0 sshd-session[31304]: Accepted publickey for zuul from 192.168.122.30 port 39672 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:45:52 compute-0 systemd-logind[797]: New session 8 of user zuul.
Jan 31 05:45:52 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 31 05:45:52 compute-0 sshd-session[31304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:45:53 compute-0 python3.9[31457]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 05:45:54 compute-0 python3.9[31631]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:45:55 compute-0 sudo[31781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgoqbbiabpowrvbqamzixlzmpsarzsou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838354.730031-40-88627616385448/AnsiballZ_command.py'
Jan 31 05:45:55 compute-0 sudo[31781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:45:55 compute-0 python3.9[31783]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:45:55 compute-0 sudo[31781]: pam_unix(sudo:session): session closed for user root
Jan 31 05:45:56 compute-0 sudo[31934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idqtrizmxredahwpvhmtdwvphttyqmai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838355.6581311-52-129397005213162/AnsiballZ_stat.py'
Jan 31 05:45:56 compute-0 sudo[31934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:45:56 compute-0 python3.9[31936]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:45:56 compute-0 sudo[31934]: pam_unix(sudo:session): session closed for user root
Jan 31 05:45:56 compute-0 sudo[32086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quatnapfohjtymfstuocwxndcsgjuuiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838356.4577193-60-45130349822393/AnsiballZ_file.py'
Jan 31 05:45:56 compute-0 sudo[32086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:45:57 compute-0 python3.9[32088]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:45:57 compute-0 sudo[32086]: pam_unix(sudo:session): session closed for user root
Jan 31 05:45:57 compute-0 sudo[32238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxuepzilatjbpfncrxrjzobfknlmmrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838357.1813142-68-231321495481949/AnsiballZ_stat.py'
Jan 31 05:45:57 compute-0 sudo[32238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:45:57 compute-0 python3.9[32240]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:45:57 compute-0 sudo[32238]: pam_unix(sudo:session): session closed for user root
Jan 31 05:45:58 compute-0 sudo[32361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlmxwjgtjetljloqdunipdidwweqljyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838357.1813142-68-231321495481949/AnsiballZ_copy.py'
Jan 31 05:45:58 compute-0 sudo[32361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:45:58 compute-0 python3.9[32363]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838357.1813142-68-231321495481949/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:45:58 compute-0 sudo[32361]: pam_unix(sudo:session): session closed for user root
Jan 31 05:45:58 compute-0 sudo[32513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waxiyycbjeoujpmxsvjztbsdqnwupbph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838358.53606-83-44912163519055/AnsiballZ_setup.py'
Jan 31 05:45:58 compute-0 sudo[32513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:45:59 compute-0 python3.9[32515]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:45:59 compute-0 sudo[32513]: pam_unix(sudo:session): session closed for user root
Jan 31 05:45:59 compute-0 sudo[32670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqwzgdgczdvdrdkynwfxuevmzwhwjxqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838359.373278-91-205193908505004/AnsiballZ_file.py'
Jan 31 05:45:59 compute-0 sudo[32670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:45:59 compute-0 python3.9[32672]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:45:59 compute-0 sudo[32670]: pam_unix(sudo:session): session closed for user root
Jan 31 05:46:00 compute-0 sudo[32822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdvhyasghaajqvhrmjhjwbnttysuxfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838360.0196705-100-41329146373157/AnsiballZ_file.py'
Jan 31 05:46:00 compute-0 sudo[32822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:46:00 compute-0 python3.9[32824]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:46:00 compute-0 sudo[32822]: pam_unix(sudo:session): session closed for user root
Jan 31 05:46:01 compute-0 python3.9[32974]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:46:04 compute-0 python3.9[33227]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:46:05 compute-0 python3.9[33377]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:46:06 compute-0 python3.9[33531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:46:07 compute-0 sudo[33687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwsmwizjpztyapntlnobmaqectfkhwpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838367.0483904-148-166810797838169/AnsiballZ_setup.py'
Jan 31 05:46:07 compute-0 sudo[33687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:46:07 compute-0 python3.9[33689]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:46:07 compute-0 sudo[33687]: pam_unix(sudo:session): session closed for user root
Jan 31 05:46:08 compute-0 sudo[33771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uussfwwisepvbmfplcbdfqcavlazmbhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838367.0483904-148-166810797838169/AnsiballZ_dnf.py'
Jan 31 05:46:08 compute-0 sudo[33771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:46:08 compute-0 python3.9[33773]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:46:49 compute-0 systemd[1]: Reloading.
Jan 31 05:46:49 compute-0 systemd-rc-local-generator[33970]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:46:49 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 05:46:49 compute-0 systemd[1]: Reloading.
Jan 31 05:46:49 compute-0 systemd-rc-local-generator[34013]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:46:49 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 05:46:49 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 05:46:49 compute-0 systemd[1]: Reloading.
Jan 31 05:46:49 compute-0 systemd-rc-local-generator[34058]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:46:49 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 05:46:50 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Jan 31 05:46:50 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Jan 31 05:46:50 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Jan 31 05:47:43 compute-0 kernel: SELinux:  Converting 2726 SID table entries...
Jan 31 05:47:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:47:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 05:47:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:47:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:47:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:47:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:47:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:47:44 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 05:47:44 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:47:44 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:47:44 compute-0 systemd[1]: Reloading.
Jan 31 05:47:44 compute-0 systemd-rc-local-generator[34416]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:47:44 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:47:44 compute-0 sudo[33771]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:45 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:47:45 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:47:45 compute-0 systemd[1]: run-r47ac234bad294e30a06c39d6261d84e7.service: Deactivated successfully.
Jan 31 05:47:45 compute-0 sudo[35329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnelvmptstqrqmmpncnzyamhbbhjvrpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838465.1286566-160-237725666716673/AnsiballZ_command.py'
Jan 31 05:47:45 compute-0 sudo[35329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:45 compute-0 python3.9[35331]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:47:46 compute-0 sudo[35329]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:47 compute-0 sudo[35610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejsptnmrpgibrlagtjbibcgwftgrkwrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838466.6426976-168-9389865783498/AnsiballZ_selinux.py'
Jan 31 05:47:47 compute-0 sudo[35610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:47 compute-0 python3.9[35612]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 05:47:47 compute-0 sudo[35610]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:48 compute-0 sudo[35762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzzsdvlhabywflqgbthcacvppufonghl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838468.0725806-179-39513615368602/AnsiballZ_command.py'
Jan 31 05:47:48 compute-0 sudo[35762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:48 compute-0 python3.9[35764]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 05:47:49 compute-0 sudo[35762]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:49 compute-0 sudo[35915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myrtlnsjmrzcdincvtecplghuolbylpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838469.2677383-187-180121609527728/AnsiballZ_file.py'
Jan 31 05:47:49 compute-0 sudo[35915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:50 compute-0 python3.9[35917]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:47:50 compute-0 sudo[35915]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:50 compute-0 sudo[36067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oetwsspfsjkdlybzjqyvyxwuovvgbeyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838470.4931417-195-130100547075637/AnsiballZ_mount.py'
Jan 31 05:47:50 compute-0 sudo[36067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:51 compute-0 python3.9[36069]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 05:47:51 compute-0 sudo[36067]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:52 compute-0 sudo[36219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iycvljtfdajwknoqcawqhheqfglzfalb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838472.1933663-223-258313463361764/AnsiballZ_file.py'
Jan 31 05:47:52 compute-0 sudo[36219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:52 compute-0 python3.9[36221]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:47:52 compute-0 sudo[36219]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:53 compute-0 sudo[36371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlxmsnkwmvhodsniftynxbrlhwuxhcki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838472.7861776-231-69354005558897/AnsiballZ_stat.py'
Jan 31 05:47:53 compute-0 sudo[36371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:53 compute-0 python3.9[36373]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:47:53 compute-0 sudo[36371]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:53 compute-0 sudo[36494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zngupmzchunedadwpjfmhxrqsqvzarii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838472.7861776-231-69354005558897/AnsiballZ_copy.py'
Jan 31 05:47:53 compute-0 sudo[36494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:53 compute-0 python3.9[36496]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838472.7861776-231-69354005558897/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=51097f97821b38d376db29a43d97251b98a9bbe7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:47:53 compute-0 sudo[36494]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:54 compute-0 sudo[36646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqmkrdqwjfhfsmyoizcenvyjfyhemkwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838474.21877-255-153606497421312/AnsiballZ_stat.py'
Jan 31 05:47:54 compute-0 sudo[36646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:57 compute-0 python3.9[36648]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:47:57 compute-0 sudo[36646]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:57 compute-0 sudo[36798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbiypwmbsxpvmntvwmfyqejssogdnukt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838477.3111005-263-54159024568144/AnsiballZ_command.py'
Jan 31 05:47:57 compute-0 sudo[36798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:57 compute-0 python3.9[36800]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:47:57 compute-0 sudo[36798]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:58 compute-0 sudo[36951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llattkibocwshlqckxctyanbmhedxmkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838477.9655137-271-277988087955987/AnsiballZ_file.py'
Jan 31 05:47:58 compute-0 sudo[36951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:58 compute-0 python3.9[36953]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:47:58 compute-0 sudo[36951]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:59 compute-0 sudo[37103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jynpskgrdgpuubmosefweadapfumrynn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838478.7532332-282-67508781594029/AnsiballZ_getent.py'
Jan 31 05:47:59 compute-0 sudo[37103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:47:59 compute-0 python3.9[37105]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 05:47:59 compute-0 sudo[37103]: pam_unix(sudo:session): session closed for user root
Jan 31 05:47:59 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:48:00 compute-0 sudo[37257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vizdasddipldwdbwuzimxtofjptesvcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838479.6277459-290-40244627399083/AnsiballZ_group.py'
Jan 31 05:48:00 compute-0 sudo[37257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:00 compute-0 python3.9[37259]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 05:48:00 compute-0 groupadd[37261]: group added to /etc/group: name=qemu, GID=107
Jan 31 05:48:00 compute-0 groupadd[37261]: group added to /etc/gshadow: name=qemu
Jan 31 05:48:00 compute-0 groupadd[37261]: new group: name=qemu, GID=107
Jan 31 05:48:00 compute-0 sudo[37257]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:01 compute-0 sudo[37416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexnjtakquizwfducnctpmpgyhsmmces ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838480.8336694-298-177782330436216/AnsiballZ_user.py'
Jan 31 05:48:01 compute-0 sudo[37416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:01 compute-0 python3.9[37418]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 05:48:01 compute-0 useradd[37420]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 05:48:01 compute-0 sudo[37416]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:01 compute-0 sudo[37576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wunmgfkrxzeuckxowuyvkhvicdpanfky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838481.63916-306-265526229133790/AnsiballZ_getent.py'
Jan 31 05:48:01 compute-0 sudo[37576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:02 compute-0 python3.9[37578]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 05:48:02 compute-0 sudo[37576]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:02 compute-0 sudo[37729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptdlfdfwvcyeyledvtiifmipeybfrgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838482.2906873-314-33876066434858/AnsiballZ_group.py'
Jan 31 05:48:02 compute-0 sudo[37729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:02 compute-0 python3.9[37731]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 05:48:02 compute-0 systemd[1]: Starting dnf makecache...
Jan 31 05:48:02 compute-0 groupadd[37733]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 31 05:48:02 compute-0 groupadd[37733]: group added to /etc/gshadow: name=hugetlbfs
Jan 31 05:48:02 compute-0 groupadd[37733]: new group: name=hugetlbfs, GID=42477
Jan 31 05:48:02 compute-0 sudo[37729]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:03 compute-0 dnf[37732]: Failed determining last makecache time.
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-barbican-42b4c41831408a8e323 128 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-python-glean-642fffe0203a8ffcc2443db52 194 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-cinder-1c00d6490d88e436f26ef 195 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-python-stevedore-c4acc5639fd2329372142 182 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-python-cloudkitty-tests-tempest-783703 193 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-diskimage-builder-61b717cc45660834fe9a 201 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-nova-eaa65f0b85123a4ee343246 184 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-python-designate-tests-tempest-347fdbc 192 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-glance-1fd12c29b339f30fe823e 202 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 191 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-manila-d783d10e75495b73866db 183 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-neutron-95cadbd379667c8520c8 202 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-octavia-5975097dd4b021385178 195 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-watcher-c014f81a8647287f6dcc 193 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-python-tcib-78032d201b02cee27e8e644c61 190 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 186 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-swift-dc98a8463506ac520c469a 186 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-python-tempestconf-8515371b7cceebd4282 194 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 dnf[37732]: delorean-openstack-heat-ui-013accbfd179753bc3f0 185 kB/s | 3.0 kB     00:00
Jan 31 05:48:03 compute-0 sudo[37907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tumkvmhbiaaknsflckycupikamcecdqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838483.1980662-323-59794995422696/AnsiballZ_file.py'
Jan 31 05:48:03 compute-0 sudo[37907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:03 compute-0 dnf[37732]: CentOS Stream 9 - BaseOS                         52 kB/s | 6.1 kB     00:00
Jan 31 05:48:03 compute-0 python3.9[37910]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 05:48:03 compute-0 sudo[37907]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:03 compute-0 dnf[37732]: CentOS Stream 9 - AppStream                      59 kB/s | 6.5 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: CentOS Stream 9 - CRB                            45 kB/s | 6.0 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: CentOS Stream 9 - Extras packages                70 kB/s | 7.3 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: dlrn-antelope-testing                           103 kB/s | 3.0 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: dlrn-antelope-build-deps                        111 kB/s | 3.0 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: centos9-rabbitmq                                 88 kB/s | 3.0 kB     00:00
Jan 31 05:48:04 compute-0 sudo[38069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdfpmrquxwifqirrqjufrolmhhjzxbod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838484.0176282-334-123943955005361/AnsiballZ_dnf.py'
Jan 31 05:48:04 compute-0 sudo[38069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:04 compute-0 dnf[37732]: centos9-storage                                  52 kB/s | 3.0 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: centos9-opstools                                 93 kB/s | 3.0 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: NFV SIG OpenvSwitch                              96 kB/s | 3.0 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: repo-setup-centos-appstream                     131 kB/s | 4.4 kB     00:00
Jan 31 05:48:04 compute-0 python3.9[38071]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:48:04 compute-0 dnf[37732]: repo-setup-centos-baseos                        178 kB/s | 3.9 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: repo-setup-centos-highavailability              182 kB/s | 3.9 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: repo-setup-centos-powertools                    228 kB/s | 4.3 kB     00:00
Jan 31 05:48:04 compute-0 dnf[37732]: Extra Packages for Enterprise Linux 9 - x86_64  257 kB/s |  31 kB     00:00
Jan 31 05:48:05 compute-0 dnf[37732]: Metadata cache created.
Jan 31 05:48:05 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 05:48:05 compute-0 systemd[1]: Finished dnf makecache.
Jan 31 05:48:05 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.737s CPU time.
Jan 31 05:48:08 compute-0 sudo[38069]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:08 compute-0 sudo[38235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvpddxdzqetyqfdbszqiwcsntxbvhwxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838488.7253115-342-237426992555847/AnsiballZ_file.py'
Jan 31 05:48:08 compute-0 sudo[38235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:09 compute-0 python3.9[38237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:48:09 compute-0 sudo[38235]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:09 compute-0 sudo[38387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakcskfrnkhsccwkobrscebijwqampmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838489.3904803-350-254121686342597/AnsiballZ_stat.py'
Jan 31 05:48:09 compute-0 sudo[38387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:09 compute-0 python3.9[38389]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:48:09 compute-0 sudo[38387]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:10 compute-0 sudo[38510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtjkdatkksvxkzjxtrwxgetzjabxzhqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838489.3904803-350-254121686342597/AnsiballZ_copy.py'
Jan 31 05:48:10 compute-0 sudo[38510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:10 compute-0 python3.9[38512]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769838489.3904803-350-254121686342597/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:48:10 compute-0 sudo[38510]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:11 compute-0 sudo[38662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzjrdlhjhfanvlydeanfcsriaunjhyjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838490.7630942-365-248418564243439/AnsiballZ_systemd.py'
Jan 31 05:48:11 compute-0 sudo[38662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:11 compute-0 python3.9[38664]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:48:11 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 05:48:11 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 05:48:11 compute-0 kernel: Bridge firewalling registered
Jan 31 05:48:11 compute-0 systemd-modules-load[38668]: Inserted module 'br_netfilter'
Jan 31 05:48:11 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 05:48:11 compute-0 sudo[38662]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:12 compute-0 sudo[38822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzowvbsqowpgpoptrhqxwfrhdlsdwik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838491.9367719-373-103826952285770/AnsiballZ_stat.py'
Jan 31 05:48:12 compute-0 sudo[38822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:12 compute-0 python3.9[38824]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:48:12 compute-0 sudo[38822]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:12 compute-0 sudo[38945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orrgbdugdewhjndnfvahunsvjvwvgthl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838491.9367719-373-103826952285770/AnsiballZ_copy.py'
Jan 31 05:48:12 compute-0 sudo[38945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:12 compute-0 python3.9[38947]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769838491.9367719-373-103826952285770/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:48:12 compute-0 sudo[38945]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:13 compute-0 sudo[39097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vueskcvqpnvfzgasgpjyzatwsohqhzwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838493.2786434-391-182124233227027/AnsiballZ_dnf.py'
Jan 31 05:48:13 compute-0 sudo[39097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:13 compute-0 python3.9[39099]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:48:18 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Jan 31 05:48:18 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Jan 31 05:48:19 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:48:19 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:48:19 compute-0 systemd[1]: Reloading.
Jan 31 05:48:19 compute-0 systemd-rc-local-generator[39161]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:48:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:48:21 compute-0 sudo[39097]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:22 compute-0 python3.9[41333]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:48:22 compute-0 python3.9[42441]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 05:48:23 compute-0 python3.9[43154]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:48:23 compute-0 sudo[43309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwfgtwhvuwyyvdlnbxghrwolgewkqbfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838503.7611835-430-251474786555085/AnsiballZ_command.py'
Jan 31 05:48:23 compute-0 sudo[43309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:24 compute-0 python3.9[43311]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:48:24 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 05:48:24 compute-0 systemd[1]: Starting Authorization Manager...
Jan 31 05:48:24 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 05:48:24 compute-0 polkitd[43528]: Started polkitd version 0.117
Jan 31 05:48:24 compute-0 polkitd[43528]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 05:48:24 compute-0 polkitd[43528]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 05:48:24 compute-0 polkitd[43528]: Finished loading, compiling and executing 2 rules
Jan 31 05:48:24 compute-0 systemd[1]: Started Authorization Manager.
Jan 31 05:48:24 compute-0 polkitd[43528]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 31 05:48:24 compute-0 sudo[43309]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:48:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:48:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.425s CPU time.
Jan 31 05:48:25 compute-0 systemd[1]: run-r18ba5903b82f485790fc08e8427e4372.service: Deactivated successfully.
Jan 31 05:48:25 compute-0 sudo[43697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckkueygoypqqdvdirfsiytvtlwprmzsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838505.1735187-439-217926330235354/AnsiballZ_systemd.py'
Jan 31 05:48:25 compute-0 sudo[43697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:25 compute-0 python3.9[43699]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:48:25 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 05:48:25 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 05:48:25 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 05:48:25 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 05:48:26 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 05:48:26 compute-0 sudo[43697]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:26 compute-0 python3.9[43860]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 05:48:28 compute-0 sudo[44010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ythghgqvqzxbjuzlfbrwqcqinytmsicd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838508.6460383-496-2351904741162/AnsiballZ_systemd.py'
Jan 31 05:48:28 compute-0 sudo[44010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:29 compute-0 python3.9[44012]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:48:29 compute-0 systemd[1]: Reloading.
Jan 31 05:48:29 compute-0 systemd-rc-local-generator[44040]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:48:29 compute-0 sudo[44010]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:29 compute-0 sudo[44200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hajwcqkqiavdenabvstrtelfyunnwbuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838509.5642698-496-176233192857434/AnsiballZ_systemd.py'
Jan 31 05:48:29 compute-0 sudo[44200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:30 compute-0 python3.9[44202]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:48:30 compute-0 systemd[1]: Reloading.
Jan 31 05:48:30 compute-0 systemd-rc-local-generator[44229]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:48:30 compute-0 sudo[44200]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:30 compute-0 sudo[44390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umtnpwpjynejmwdrtooouyowoeqselhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838510.6775491-512-202551563851848/AnsiballZ_command.py'
Jan 31 05:48:30 compute-0 sudo[44390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:31 compute-0 python3.9[44392]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:48:31 compute-0 sudo[44390]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:31 compute-0 sudo[44543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lixbjqtyslpbsnntcwglbrfcukmmyzzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838511.340096-520-123408984259397/AnsiballZ_command.py'
Jan 31 05:48:31 compute-0 sudo[44543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:31 compute-0 python3.9[44545]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:48:31 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 05:48:31 compute-0 sudo[44543]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:32 compute-0 sudo[44696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebgtmtixfqlohjsntnkxvahyamuywdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838512.013642-528-278171576925673/AnsiballZ_command.py'
Jan 31 05:48:32 compute-0 sudo[44696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:32 compute-0 python3.9[44698]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:48:33 compute-0 sudo[44696]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:34 compute-0 sudo[44858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvnhhvioidpazaldezdgnetwmwwjocrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838514.022202-536-179406822904886/AnsiballZ_command.py'
Jan 31 05:48:34 compute-0 sudo[44858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:34 compute-0 python3.9[44860]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:48:34 compute-0 sudo[44858]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:34 compute-0 sudo[45011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpehnnkylnfbyijtzvuuqnzdddwoyasf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838514.7087412-544-79666083137949/AnsiballZ_systemd.py'
Jan 31 05:48:34 compute-0 sudo[45011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:35 compute-0 python3.9[45013]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:48:35 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 05:48:35 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 05:48:35 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 05:48:35 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 31 05:48:35 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 05:48:35 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 31 05:48:35 compute-0 sudo[45011]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:35 compute-0 sshd-session[31307]: Connection closed by 192.168.122.30 port 39672
Jan 31 05:48:35 compute-0 sshd-session[31304]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:48:35 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 05:48:35 compute-0 systemd[1]: session-8.scope: Consumed 2min 1.341s CPU time.
Jan 31 05:48:35 compute-0 systemd-logind[797]: Session 8 logged out. Waiting for processes to exit.
Jan 31 05:48:35 compute-0 systemd-logind[797]: Removed session 8.
Jan 31 05:48:41 compute-0 sshd-session[45043]: Accepted publickey for zuul from 192.168.122.30 port 57946 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:48:41 compute-0 systemd-logind[797]: New session 9 of user zuul.
Jan 31 05:48:41 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 31 05:48:41 compute-0 sshd-session[45043]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:48:42 compute-0 python3.9[45196]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:48:43 compute-0 sudo[45350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auuiwqsvkxjmbybgxquoxvatbrkzwnqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838522.745587-31-82599521192641/AnsiballZ_getent.py'
Jan 31 05:48:43 compute-0 sudo[45350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:43 compute-0 python3.9[45352]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 05:48:43 compute-0 sudo[45350]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:43 compute-0 sudo[45503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idgkizhwsafplfxfcodvsjqatagmgwne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838523.5318842-39-190098679274875/AnsiballZ_group.py'
Jan 31 05:48:43 compute-0 sudo[45503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:44 compute-0 python3.9[45505]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 05:48:44 compute-0 groupadd[45506]: group added to /etc/group: name=openvswitch, GID=42476
Jan 31 05:48:44 compute-0 groupadd[45506]: group added to /etc/gshadow: name=openvswitch
Jan 31 05:48:44 compute-0 groupadd[45506]: new group: name=openvswitch, GID=42476
Jan 31 05:48:44 compute-0 sudo[45503]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:44 compute-0 sudo[45661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajemvpgckeucmgcngsnqycqjyclrmrqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838524.3494713-47-30017529163080/AnsiballZ_user.py'
Jan 31 05:48:44 compute-0 sudo[45661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:44 compute-0 python3.9[45663]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 05:48:44 compute-0 useradd[45665]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 05:48:44 compute-0 useradd[45665]: add 'openvswitch' to group 'hugetlbfs'
Jan 31 05:48:44 compute-0 useradd[45665]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 31 05:48:45 compute-0 sudo[45661]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:45 compute-0 sudo[45821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tevhujrzcteizvldwgeutgpdvnjffwqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838525.2844512-57-205136764345652/AnsiballZ_setup.py'
Jan 31 05:48:45 compute-0 sudo[45821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:45 compute-0 python3.9[45823]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:48:46 compute-0 sudo[45821]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:46 compute-0 sudo[45905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xusppygpwglajlbxxufflvvdpzgtagnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838525.2844512-57-205136764345652/AnsiballZ_dnf.py'
Jan 31 05:48:46 compute-0 sudo[45905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:46 compute-0 python3.9[45907]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 05:48:48 compute-0 sudo[45905]: pam_unix(sudo:session): session closed for user root
Jan 31 05:48:49 compute-0 sudo[46068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azaegwxygnaghjtqemyvrhlivswpbfyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838528.7041016-71-150398376532151/AnsiballZ_dnf.py'
Jan 31 05:48:49 compute-0 sudo[46068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:48:49 compute-0 python3.9[46070]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:48:59 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 05:48:59 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:48:59 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 05:48:59 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:48:59 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:48:59 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:48:59 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:48:59 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:48:59 compute-0 groupadd[46093]: group added to /etc/group: name=unbound, GID=994
Jan 31 05:48:59 compute-0 groupadd[46093]: group added to /etc/gshadow: name=unbound
Jan 31 05:48:59 compute-0 groupadd[46093]: new group: name=unbound, GID=994
Jan 31 05:48:59 compute-0 useradd[46100]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 31 05:48:59 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 05:48:59 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 05:49:00 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:49:00 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:49:00 compute-0 systemd[1]: Reloading.
Jan 31 05:49:00 compute-0 systemd-sysv-generator[46599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:49:00 compute-0 systemd-rc-local-generator[46596]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:49:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:49:01 compute-0 sudo[46068]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:49:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:49:01 compute-0 systemd[1]: run-re247d2e6504d4e6793cba34f8b0e4621.service: Deactivated successfully.
Jan 31 05:49:02 compute-0 sudo[47167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rulautsyhglvjbmtvvjgkeyxrbejzbbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838541.5208662-79-152263295968789/AnsiballZ_systemd.py'
Jan 31 05:49:02 compute-0 sudo[47167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:02 compute-0 python3.9[47169]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:49:02 compute-0 systemd[1]: Reloading.
Jan 31 05:49:02 compute-0 systemd-rc-local-generator[47197]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:49:02 compute-0 systemd-sysv-generator[47203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:49:02 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 05:49:02 compute-0 chown[47211]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 05:49:02 compute-0 ovs-ctl[47216]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 05:49:02 compute-0 ovs-ctl[47216]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 05:49:02 compute-0 ovs-ctl[47216]: Starting ovsdb-server [  OK  ]
Jan 31 05:49:02 compute-0 ovs-vsctl[47265]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 05:49:02 compute-0 ovs-vsctl[47284]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"bf4b4a34-237c-4fe2-88ca-4e5346644b6b\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 05:49:02 compute-0 ovs-ctl[47216]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 05:49:03 compute-0 ovs-vsctl[47290]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 05:49:03 compute-0 ovs-ctl[47216]: Enabling remote OVSDB managers [  OK  ]
Jan 31 05:49:03 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 05:49:03 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 05:49:03 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 05:49:03 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 05:49:03 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 05:49:03 compute-0 ovs-ctl[47335]: Inserting openvswitch module [  OK  ]
Jan 31 05:49:03 compute-0 ovs-ctl[47304]: Starting ovs-vswitchd [  OK  ]
Jan 31 05:49:03 compute-0 ovs-vsctl[47353]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 05:49:03 compute-0 ovs-ctl[47304]: Enabling remote OVSDB managers [  OK  ]
Jan 31 05:49:03 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 05:49:03 compute-0 systemd[1]: Starting Open vSwitch...
Jan 31 05:49:03 compute-0 systemd[1]: Finished Open vSwitch.
Jan 31 05:49:03 compute-0 sudo[47167]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:04 compute-0 python3.9[47504]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:49:04 compute-0 sudo[47654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aflhyyczckxwvpyeyblghvdmenyhwczs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838544.4596078-97-173715141314531/AnsiballZ_sefcontext.py'
Jan 31 05:49:04 compute-0 sudo[47654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:05 compute-0 python3.9[47656]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 05:49:06 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 05:49:06 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 05:49:06 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 05:49:06 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 05:49:06 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 05:49:06 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 05:49:06 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 05:49:06 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 05:49:06 compute-0 sudo[47654]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:07 compute-0 python3.9[47811]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:49:07 compute-0 sudo[47967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puyvafhndbjpxgsawbntkauohhllhagp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838547.6068964-115-182425469074789/AnsiballZ_dnf.py'
Jan 31 05:49:07 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 05:49:07 compute-0 sudo[47967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:08 compute-0 python3.9[47969]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:49:09 compute-0 sudo[47967]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:09 compute-0 sudo[48120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfbxzfxlmmwulyrafbapcmowtwtscsgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838549.4157822-123-188856106261311/AnsiballZ_command.py'
Jan 31 05:49:09 compute-0 sudo[48120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:10 compute-0 python3.9[48122]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:49:10 compute-0 sudo[48120]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:11 compute-0 sudo[48407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yadnaxknzdtpxmdmgvloouzlqxuaiktm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838550.9265656-131-163673882419112/AnsiballZ_file.py'
Jan 31 05:49:11 compute-0 sudo[48407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:11 compute-0 python3.9[48409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 05:49:11 compute-0 sudo[48407]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:12 compute-0 python3.9[48559]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:49:13 compute-0 sudo[48711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxdlnepdcrzuyfuduogzaicwxwuhwcij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838552.7103865-147-266154046893719/AnsiballZ_dnf.py'
Jan 31 05:49:13 compute-0 sudo[48711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:13 compute-0 python3.9[48713]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:49:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:49:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:49:14 compute-0 systemd[1]: Reloading.
Jan 31 05:49:15 compute-0 systemd-rc-local-generator[48744]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:49:15 compute-0 systemd-sysv-generator[48750]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:49:15 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:49:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:49:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:49:15 compute-0 systemd[1]: run-r3f0fcdd7974b44408fac9ba9f57a9f09.service: Deactivated successfully.
Jan 31 05:49:15 compute-0 sudo[48711]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:15 compute-0 sudo[49028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzsemkntinwnawkssypznvimeieobjhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838555.6750584-155-36706515801425/AnsiballZ_systemd.py'
Jan 31 05:49:15 compute-0 sudo[49028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:16 compute-0 python3.9[49030]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:49:16 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 05:49:16 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 05:49:16 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 05:49:16 compute-0 systemd[1]: Stopping Network Manager...
Jan 31 05:49:16 compute-0 NetworkManager[7189]: <info>  [1769838556.1908] caught SIGTERM, shutting down normally.
Jan 31 05:49:16 compute-0 NetworkManager[7189]: <info>  [1769838556.1925] dhcp4 (eth0): canceled DHCP transaction
Jan 31 05:49:16 compute-0 NetworkManager[7189]: <info>  [1769838556.1925] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:49:16 compute-0 NetworkManager[7189]: <info>  [1769838556.1926] dhcp4 (eth0): state changed no lease
Jan 31 05:49:16 compute-0 NetworkManager[7189]: <info>  [1769838556.1929] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 05:49:16 compute-0 NetworkManager[7189]: <info>  [1769838556.2005] exiting (success)
Jan 31 05:49:16 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 05:49:16 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 05:49:16 compute-0 systemd[1]: Stopped Network Manager.
Jan 31 05:49:16 compute-0 systemd[1]: NetworkManager.service: Consumed 13.915s CPU time, 4.1M memory peak, read 0B from disk, written 32.0K to disk.
Jan 31 05:49:16 compute-0 systemd[1]: Starting Network Manager...
Jan 31 05:49:16 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.2491] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:d710c23e-4c03-4b1f-8d92-73ee5945a2a2)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.2492] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.2540] manager[0x55ecb4865000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 05:49:16 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 05:49:16 compute-0 systemd[1]: Started Hostname Service.
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3186] hostname: hostname: using hostnamed
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3187] hostname: static hostname changed from (none) to "compute-0"
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3190] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3194] manager[0x55ecb4865000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3194] manager[0x55ecb4865000]: rfkill: WWAN hardware radio set enabled
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3211] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3217] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3218] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3218] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3219] manager: Networking is enabled by state file
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3220] settings: Loaded settings plugin: keyfile (internal)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3223] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3242] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3248] dhcp: init: Using DHCP client 'internal'
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3251] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3254] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3258] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3264] device (lo): Activation: starting connection 'lo' (b9eb3add-3ec9-4938-9bcc-ef8bce8c9429)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3268] device (eth0): carrier: link connected
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3272] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3276] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3276] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3280] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3286] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3289] device (eth1): carrier: link connected
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3293] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3296] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (06ac7496-aabc-5155-9521-89759a7ade20) (indicated)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3296] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3299] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3305] device (eth1): Activation: starting connection 'ci-private-network' (06ac7496-aabc-5155-9521-89759a7ade20)
Jan 31 05:49:16 compute-0 systemd[1]: Started Network Manager.
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3323] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3333] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3335] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3336] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3337] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3339] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3340] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3342] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3343] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3366] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3371] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3384] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3417] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3426] dhcp4 (eth0): state changed new lease, address=38.102.83.30
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3432] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 05:49:16 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3495] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3500] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3505] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3509] device (lo): Activation: successful, device activated.
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3515] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3516] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3519] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3521] device (eth1): Activation: successful, device activated.
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3539] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3541] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3545] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3547] device (eth0): Activation: successful, device activated.
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3552] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 05:49:16 compute-0 NetworkManager[49039]: <info>  [1769838556.3555] manager: startup complete
Jan 31 05:49:16 compute-0 sudo[49028]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:16 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 31 05:49:16 compute-0 sudo[49254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kglibnqiczzonbwocsmxokhutdvvgmvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838556.579005-163-129275497874825/AnsiballZ_dnf.py'
Jan 31 05:49:16 compute-0 sudo[49254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:17 compute-0 python3.9[49256]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:49:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:49:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:49:21 compute-0 systemd[1]: Reloading.
Jan 31 05:49:21 compute-0 systemd-rc-local-generator[49308]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:49:21 compute-0 systemd-sysv-generator[49313]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:49:21 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 05:49:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:49:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:49:22 compute-0 systemd[1]: run-r74a86964b1ad40b08cbbf4fba0e9b384.service: Deactivated successfully.
Jan 31 05:49:22 compute-0 sudo[49254]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:23 compute-0 sudo[49715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irkzpmatgzjesksvfiaiqusyriqqofnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838563.167175-175-74330498138537/AnsiballZ_stat.py'
Jan 31 05:49:23 compute-0 sudo[49715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:23 compute-0 python3.9[49717]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:49:23 compute-0 sudo[49715]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:24 compute-0 sudo[49867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faruegeomfdoijjirhwglkqaqphjlgsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838563.895498-184-130578942257069/AnsiballZ_ini_file.py'
Jan 31 05:49:24 compute-0 sudo[49867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:24 compute-0 python3.9[49869]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:24 compute-0 sudo[49867]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:24 compute-0 sudo[50021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukkedphaspwdqtpwezpswlrpkwfrnkkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838564.728165-194-28609478746276/AnsiballZ_ini_file.py'
Jan 31 05:49:24 compute-0 sudo[50021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:25 compute-0 python3.9[50023]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:25 compute-0 sudo[50021]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:25 compute-0 sudo[50173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oapvdgwfrwpmfouszqtyggbrjxkudfol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838565.2578816-194-264294405602588/AnsiballZ_ini_file.py'
Jan 31 05:49:25 compute-0 sudo[50173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:25 compute-0 python3.9[50175]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:25 compute-0 sudo[50173]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:26 compute-0 sudo[50325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbaoebupcqrsbvtszgxkxfszavdirems ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838565.8388402-209-236866251550065/AnsiballZ_ini_file.py'
Jan 31 05:49:26 compute-0 sudo[50325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:26 compute-0 python3.9[50327]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:26 compute-0 sudo[50325]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:26 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 05:49:26 compute-0 sudo[50477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfyyiciojoskvgpvvxzkeajrlyrkrqnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838566.6366484-209-124536544628404/AnsiballZ_ini_file.py'
Jan 31 05:49:26 compute-0 sudo[50477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:27 compute-0 python3.9[50479]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:27 compute-0 sudo[50477]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:27 compute-0 sudo[50629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wldslasurzouzhfdxjgydjdsnepstate ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838567.237789-224-106277715785350/AnsiballZ_stat.py'
Jan 31 05:49:27 compute-0 sudo[50629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:27 compute-0 python3.9[50631]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:49:27 compute-0 sudo[50629]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:28 compute-0 sudo[50752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atheztmbfnxgzehfuqoenwhfrldnesso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838567.237789-224-106277715785350/AnsiballZ_copy.py'
Jan 31 05:49:28 compute-0 sudo[50752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:28 compute-0 python3.9[50754]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838567.237789-224-106277715785350/.source _original_basename=.lf_zz84j follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:28 compute-0 sudo[50752]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:28 compute-0 sudo[50904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwdwcslwznzjquyappxtpsxhjkukpwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838568.670182-239-221858742122501/AnsiballZ_file.py'
Jan 31 05:49:28 compute-0 sudo[50904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:29 compute-0 python3.9[50906]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:29 compute-0 sudo[50904]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:29 compute-0 sudo[51056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxumhjgoliaymkebfjphzrwhtettjzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838569.309565-247-110854365226542/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 31 05:49:29 compute-0 sudo[51056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:29 compute-0 python3.9[51058]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 05:49:29 compute-0 sudo[51056]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:30 compute-0 sudo[51208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvttpgnhtdxispqfmeoqqhcbhrtjzcvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838570.0818217-256-235065736052046/AnsiballZ_file.py'
Jan 31 05:49:30 compute-0 sudo[51208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:30 compute-0 python3.9[51210]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:30 compute-0 sudo[51208]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:31 compute-0 sudo[51360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqisjuqagcahbztaohkxtgxfmaeddjkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838570.8656917-266-153887737446201/AnsiballZ_stat.py'
Jan 31 05:49:31 compute-0 sudo[51360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:31 compute-0 sudo[51360]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:31 compute-0 sudo[51483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kirjewnlcujvaapsbsjftlkttdohgopn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838570.8656917-266-153887737446201/AnsiballZ_copy.py'
Jan 31 05:49:31 compute-0 sudo[51483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:31 compute-0 sudo[51483]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:32 compute-0 sudo[51635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiddkoufhhegclhioseufhjkznjpxlwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838572.0923722-281-99776769453438/AnsiballZ_slurp.py'
Jan 31 05:49:32 compute-0 sudo[51635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:32 compute-0 python3.9[51637]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 05:49:32 compute-0 sudo[51635]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:33 compute-0 sudo[51810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbaanauqxgdzmjwlnvesavjajrazpypw ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838572.9103599-290-261780730048520/async_wrapper.py j87507164909 300 /home/zuul/.ansible/tmp/ansible-tmp-1769838572.9103599-290-261780730048520/AnsiballZ_edpm_os_net_config.py _'
Jan 31 05:49:33 compute-0 sudo[51810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:33 compute-0 ansible-async_wrapper.py[51812]: Invoked with j87507164909 300 /home/zuul/.ansible/tmp/ansible-tmp-1769838572.9103599-290-261780730048520/AnsiballZ_edpm_os_net_config.py _
Jan 31 05:49:33 compute-0 ansible-async_wrapper.py[51815]: Starting module and watcher
Jan 31 05:49:33 compute-0 ansible-async_wrapper.py[51815]: Start watching 51816 (300)
Jan 31 05:49:33 compute-0 ansible-async_wrapper.py[51816]: Start module (51816)
Jan 31 05:49:33 compute-0 ansible-async_wrapper.py[51812]: Return async_wrapper task started.
Jan 31 05:49:33 compute-0 sudo[51810]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:33 compute-0 python3.9[51817]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 05:49:34 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 05:49:34 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 05:49:34 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 05:49:34 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 05:49:34 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.7892] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.7912] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8648] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8652] audit: op="connection-add" uuid="5ff3bd54-6468-49a2-a8dc-6d40d875bf44" name="br-ex-br" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8677] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8679] audit: op="connection-add" uuid="baa40bf4-37f4-4637-b218-823ecbaa11a8" name="br-ex-port" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8702] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8705] audit: op="connection-add" uuid="dde5450f-79dd-4c7d-a545-35c0a5c97f68" name="eth1-port" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8727] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8730] audit: op="connection-add" uuid="028bb427-f2d4-439c-a337-cb0be64c17fa" name="vlan20-port" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8750] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8753] audit: op="connection-add" uuid="741a9cea-3e19-40e8-90f6-c949df847288" name="vlan21-port" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8775] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8778] audit: op="connection-add" uuid="d38af7c0-969e-43d4-8a00-c63dd37f9717" name="vlan22-port" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8798] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8801] audit: op="connection-add" uuid="a6bcd0ae-0382-4fd9-ae42-d541371975f8" name="vlan23-port" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8836] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8866] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8869] audit: op="connection-add" uuid="0f59c338-1390-40b7-9783-3f4766cb2409" name="br-ex-if" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8930] audit: op="connection-update" uuid="06ac7496-aabc-5155-9521-89759a7ade20" name="ci-private-network" args="ovs-external-ids.data,ipv6.method,ipv6.routes,ipv6.addresses,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.dns,ipv4.never-default,ipv4.method,ipv4.routes,ipv4.addresses,ipv4.dns,ipv4.routing-rules,connection.timestamp,connection.port-type,connection.controller,connection.master,connection.slave-type,ovs-interface.type" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8960] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8963] audit: op="connection-add" uuid="eb25ae62-7abd-4390-9ac8-78cf0a5197df" name="vlan20-if" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8993] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.8997] audit: op="connection-add" uuid="48ce1510-8b08-4572-8e63-7dbaa279df10" name="vlan21-if" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9027] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9031] audit: op="connection-add" uuid="930bbd1e-7ec7-409d-8fb9-eeafc5820e8d" name="vlan22-if" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9062] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9066] audit: op="connection-add" uuid="07872ebd-2eb9-4b5b-873e-345f3b4e8297" name="vlan23-if" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9087] audit: op="connection-delete" uuid="62391001-882a-341e-86a2-0ed7a9c123d7" name="Wired connection 1" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9107] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9111] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9125] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9133] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (5ff3bd54-6468-49a2-a8dc-6d40d875bf44)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9134] audit: op="connection-activate" uuid="5ff3bd54-6468-49a2-a8dc-6d40d875bf44" name="br-ex-br" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9138] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9140] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9153] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9161] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (baa40bf4-37f4-4637-b218-823ecbaa11a8)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9165] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9167] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9177] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9185] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (dde5450f-79dd-4c7d-a545-35c0a5c97f68)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9189] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9190] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9201] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9212] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (028bb427-f2d4-439c-a337-cb0be64c17fa)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9215] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9217] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9228] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9237] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (741a9cea-3e19-40e8-90f6-c949df847288)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9240] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9242] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9255] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9263] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (d38af7c0-969e-43d4-8a00-c63dd37f9717)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9267] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9269] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9281] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9286] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (a6bcd0ae-0382-4fd9-ae42-d541371975f8)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9287] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9290] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9293] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9301] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9302] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9306] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9311] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (0f59c338-1390-40b7-9783-3f4766cb2409)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9312] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9316] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9319] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9320] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9322] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9335] device (eth1): disconnecting for new activation request.
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9336] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9340] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9343] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9344] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9347] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9349] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9353] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9358] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (eb25ae62-7abd-4390-9ac8-78cf0a5197df)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9359] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9363] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9365] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9367] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9370] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9372] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9376] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9381] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (48ce1510-8b08-4572-8e63-7dbaa279df10)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9382] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9386] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9388] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9389] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9393] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9394] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9398] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9404] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (930bbd1e-7ec7-409d-8fb9-eeafc5820e8d)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9405] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9408] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9411] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9412] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9416] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <warn>  [1769838575.9417] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9421] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9426] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (07872ebd-2eb9-4b5b-873e-345f3b4e8297)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9427] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9431] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9434] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9435] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9437] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9452] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51818 uid=0 result="success"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9454] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9459] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9461] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9468] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9473] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9478] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9493] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9498] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9510] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 kernel: Timeout policy base is empty
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9517] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 systemd-udevd[51822]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9524] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9527] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9536] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9543] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9549] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9553] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9561] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9568] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9574] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9577] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9586] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9593] dhcp4 (eth0): canceled DHCP transaction
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9593] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9594] dhcp4 (eth0): state changed no lease
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9595] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9610] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9616] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51818 uid=0 result="fail" reason="Device is not activated"
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9622] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 05:49:35 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9632] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9642] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9647] dhcp4 (eth0): state changed new lease, address=38.102.83.30
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9655] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9722] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 05:49:35 compute-0 kernel: br-ex: entered promiscuous mode
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9965] device (eth1): Activation: starting connection 'ci-private-network' (06ac7496-aabc-5155-9521-89759a7ade20)
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9973] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9977] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9981] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9984] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 kernel: vlan22: entered promiscuous mode
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9987] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9989] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9993] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:35 compute-0 NetworkManager[49039]: <info>  [1769838575.9997] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 05:49:36 compute-0 systemd-udevd[51823]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0016] device (eth1): disconnecting for new activation request.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0019] audit: op="connection-activate" uuid="06ac7496-aabc-5155-9521-89759a7ade20" name="ci-private-network" pid=51818 uid=0 result="success"
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0030] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0036] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0044] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0048] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0054] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0058] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0063] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0068] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0074] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0079] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0085] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0089] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0094] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0099] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0126] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0130] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0153] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0164] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0490] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 05:49:36 compute-0 kernel: vlan21: entered promiscuous mode
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0502] device (eth1): Activation: starting connection 'ci-private-network' (06ac7496-aabc-5155-9521-89759a7ade20)
Jan 31 05:49:36 compute-0 kernel: vlan23: entered promiscuous mode
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0543] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0553] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0567] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0594] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 05:49:36 compute-0 kernel: vlan20: entered promiscuous mode
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0624] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0635] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51818 uid=0 result="success"
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0647] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0651] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0657] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0668] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0683] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0693] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0704] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0711] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0719] device (eth1): Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0738] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0744] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0775] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 systemd-udevd[51824]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0789] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0813] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0816] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0820] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0830] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0842] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0853] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0938] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0974] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.0997] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.1006] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 05:49:36 compute-0 NetworkManager[49039]: <info>  [1769838576.1018] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 05:49:37 compute-0 NetworkManager[49039]: <info>  [1769838577.2373] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51818 uid=0 result="success"
Jan 31 05:49:37 compute-0 sudo[52179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfiqabixtlpyxiqqkpzclswjiahzjjod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838576.8838615-290-84112375565832/AnsiballZ_async_status.py'
Jan 31 05:49:37 compute-0 sudo[52179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:37 compute-0 NetworkManager[49039]: <info>  [1769838577.4809] checkpoint[0x55ecb483b950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 05:49:37 compute-0 NetworkManager[49039]: <info>  [1769838577.4813] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51818 uid=0 result="success"
Jan 31 05:49:37 compute-0 python3.9[52181]: ansible-ansible.legacy.async_status Invoked with jid=j87507164909.51812 mode=status _async_dir=/root/.ansible_async
Jan 31 05:49:37 compute-0 sudo[52179]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:37 compute-0 NetworkManager[49039]: <info>  [1769838577.9660] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51818 uid=0 result="success"
Jan 31 05:49:37 compute-0 NetworkManager[49039]: <info>  [1769838577.9678] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51818 uid=0 result="success"
Jan 31 05:49:38 compute-0 NetworkManager[49039]: <info>  [1769838578.2722] audit: op="networking-control" arg="global-dns-configuration" pid=51818 uid=0 result="success"
Jan 31 05:49:38 compute-0 NetworkManager[49039]: <info>  [1769838578.2767] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 05:49:38 compute-0 NetworkManager[49039]: <info>  [1769838578.2806] audit: op="networking-control" arg="global-dns-configuration" pid=51818 uid=0 result="success"
Jan 31 05:49:38 compute-0 NetworkManager[49039]: <info>  [1769838578.3345] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51818 uid=0 result="success"
Jan 31 05:49:38 compute-0 NetworkManager[49039]: <info>  [1769838578.4626] checkpoint[0x55ecb483ba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 05:49:38 compute-0 NetworkManager[49039]: <info>  [1769838578.4630] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51818 uid=0 result="success"
Jan 31 05:49:38 compute-0 ansible-async_wrapper.py[51816]: Module complete (51816)
Jan 31 05:49:38 compute-0 ansible-async_wrapper.py[51815]: Done in kid B.
Jan 31 05:49:40 compute-0 sudo[52285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtuhmbcjtjqgignljjjrqabtoywqwexd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838576.8838615-290-84112375565832/AnsiballZ_async_status.py'
Jan 31 05:49:40 compute-0 sudo[52285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:41 compute-0 python3.9[52287]: ansible-ansible.legacy.async_status Invoked with jid=j87507164909.51812 mode=status _async_dir=/root/.ansible_async
Jan 31 05:49:41 compute-0 sudo[52285]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:41 compute-0 sudo[52385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xksxhgladonwbbqzlgpynxjjnxbspwfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838576.8838615-290-84112375565832/AnsiballZ_async_status.py'
Jan 31 05:49:41 compute-0 sudo[52385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:41 compute-0 python3.9[52387]: ansible-ansible.legacy.async_status Invoked with jid=j87507164909.51812 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 05:49:41 compute-0 sudo[52385]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:42 compute-0 sudo[52537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqergnjsabowudwibzuwoputuaqmgfbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838581.7695894-317-84708105533955/AnsiballZ_stat.py'
Jan 31 05:49:42 compute-0 sudo[52537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:42 compute-0 python3.9[52539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:49:42 compute-0 sudo[52537]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:42 compute-0 sudo[52660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhybmiewpknpeotumnewysnkwfthdjth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838581.7695894-317-84708105533955/AnsiballZ_copy.py'
Jan 31 05:49:42 compute-0 sudo[52660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:42 compute-0 python3.9[52662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838581.7695894-317-84708105533955/.source.returncode _original_basename=.ex68io7v follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:42 compute-0 sudo[52660]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:43 compute-0 sudo[52812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpcemtgkrztoatpachyiarfvjafbeoiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838582.9197464-333-232850679395750/AnsiballZ_stat.py'
Jan 31 05:49:43 compute-0 sudo[52812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:43 compute-0 python3.9[52814]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:49:43 compute-0 sudo[52812]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:43 compute-0 sudo[52935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfiyhltfmhgtmuymmvqisyekcpgsbiej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838582.9197464-333-232850679395750/AnsiballZ_copy.py'
Jan 31 05:49:43 compute-0 sudo[52935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:43 compute-0 python3.9[52937]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838582.9197464-333-232850679395750/.source.cfg _original_basename=.ewe08pkw follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:49:43 compute-0 sudo[52935]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:44 compute-0 sudo[53088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwshmhanbzuggvadygfrelrkqioixss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838584.114634-348-265702395565501/AnsiballZ_systemd.py'
Jan 31 05:49:44 compute-0 sudo[53088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:49:44 compute-0 python3.9[53090]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:49:44 compute-0 systemd[1]: Reloading Network Manager...
Jan 31 05:49:44 compute-0 NetworkManager[49039]: <info>  [1769838584.7382] audit: op="reload" arg="0" pid=53094 uid=0 result="success"
Jan 31 05:49:44 compute-0 NetworkManager[49039]: <info>  [1769838584.7389] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 05:49:44 compute-0 systemd[1]: Reloaded Network Manager.
Jan 31 05:49:44 compute-0 sudo[53088]: pam_unix(sudo:session): session closed for user root
Jan 31 05:49:45 compute-0 sshd-session[45046]: Connection closed by 192.168.122.30 port 57946
Jan 31 05:49:45 compute-0 sshd-session[45043]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:49:45 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 05:49:45 compute-0 systemd[1]: session-9.scope: Consumed 45.024s CPU time.
Jan 31 05:49:45 compute-0 systemd-logind[797]: Session 9 logged out. Waiting for processes to exit.
Jan 31 05:49:45 compute-0 systemd-logind[797]: Removed session 9.
Jan 31 05:49:46 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 05:49:51 compute-0 sshd-session[53127]: Accepted publickey for zuul from 192.168.122.30 port 44676 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:49:51 compute-0 systemd-logind[797]: New session 10 of user zuul.
Jan 31 05:49:51 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 31 05:49:51 compute-0 sshd-session[53127]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:49:52 compute-0 python3.9[53280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:49:53 compute-0 python3.9[53434]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:49:54 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 05:49:54 compute-0 python3.9[53631]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:49:55 compute-0 sshd-session[53503]: Invalid user admin from 78.128.112.74 port 50684
Jan 31 05:49:55 compute-0 sshd-session[53503]: Connection closed by invalid user admin 78.128.112.74 port 50684 [preauth]
Jan 31 05:49:55 compute-0 sshd-session[53130]: Connection closed by 192.168.122.30 port 44676
Jan 31 05:49:55 compute-0 sshd-session[53127]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:49:55 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 05:49:55 compute-0 systemd[1]: session-10.scope: Consumed 2.171s CPU time.
Jan 31 05:49:55 compute-0 systemd-logind[797]: Session 10 logged out. Waiting for processes to exit.
Jan 31 05:49:55 compute-0 systemd-logind[797]: Removed session 10.
Jan 31 05:50:01 compute-0 sshd-session[53659]: Accepted publickey for zuul from 192.168.122.30 port 32818 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:50:01 compute-0 systemd-logind[797]: New session 11 of user zuul.
Jan 31 05:50:01 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 31 05:50:01 compute-0 sshd-session[53659]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:50:02 compute-0 python3.9[53812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:50:03 compute-0 python3.9[53966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:50:03 compute-0 sudo[54121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sccazlzvaglipqyjerweqrnvigiyibxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838603.3840904-35-105343905311090/AnsiballZ_setup.py'
Jan 31 05:50:03 compute-0 sudo[54121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:03 compute-0 python3.9[54123]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:50:04 compute-0 sudo[54121]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:04 compute-0 sudo[54205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aexlrhuepsvydgslxrevxlryfvgbnwgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838603.3840904-35-105343905311090/AnsiballZ_dnf.py'
Jan 31 05:50:04 compute-0 sudo[54205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:04 compute-0 python3.9[54207]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:50:05 compute-0 sudo[54205]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:06 compute-0 sudo[54358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vennmzztxqggcnbmzwkgerekhuygxnhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838606.154133-47-42233583463716/AnsiballZ_setup.py'
Jan 31 05:50:06 compute-0 sudo[54358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:06 compute-0 python3.9[54360]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:50:06 compute-0 sudo[54358]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:07 compute-0 sudo[54554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlqtdlidqjnrrxtxoehapzbatemhpykg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838607.2509177-58-257694649889434/AnsiballZ_file.py'
Jan 31 05:50:07 compute-0 sudo[54554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:07 compute-0 python3.9[54556]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:07 compute-0 sudo[54554]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:08 compute-0 sudo[54706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njcarglgcztonapmieyepqvftehnzarg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838608.075158-66-95300250927264/AnsiballZ_command.py'
Jan 31 05:50:08 compute-0 sudo[54706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:08 compute-0 python3.9[54708]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3383131097-merged.mount: Deactivated successfully.
Jan 31 05:50:08 compute-0 podman[54709]: 2026-01-31 05:50:08.787319229 +0000 UTC m=+0.065502464 system refresh
Jan 31 05:50:08 compute-0 sudo[54706]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:09 compute-0 sudo[54869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzvkrcinqlskgqoesnqbwlrqpuwglsab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838609.0172713-74-65811530690670/AnsiballZ_stat.py'
Jan 31 05:50:09 compute-0 sudo[54869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:09 compute-0 python3.9[54871]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:09 compute-0 sudo[54869]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:50:10 compute-0 sudo[54992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmnikkcptlulxnxgkcsvrlofjbdbzoae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838609.0172713-74-65811530690670/AnsiballZ_copy.py'
Jan 31 05:50:10 compute-0 sudo[54992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:10 compute-0 python3.9[54994]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838609.0172713-74-65811530690670/.source.json follow=False _original_basename=podman_network_config.j2 checksum=2e9767a5b36dacbc6d6821940539551786ed075b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:10 compute-0 sudo[54992]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:10 compute-0 sudo[55144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttmhnghhlyhiaaprgwfruuxabkqdbkgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838610.5843697-89-245168185097890/AnsiballZ_stat.py'
Jan 31 05:50:10 compute-0 sudo[55144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:11 compute-0 python3.9[55146]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:11 compute-0 sudo[55144]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:11 compute-0 sudo[55267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pohpgfrshcnltaxfimdtjoptuswofjos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838610.5843697-89-245168185097890/AnsiballZ_copy.py'
Jan 31 05:50:11 compute-0 sudo[55267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:11 compute-0 python3.9[55269]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769838610.5843697-89-245168185097890/.source.conf follow=False _original_basename=registries.conf.j2 checksum=fd66d1bd6eb6a307cf30909e9f431d706443d492 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:50:11 compute-0 sudo[55267]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:12 compute-0 sudo[55419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkwctprmuqyivgnbnelcdwipjjklgdml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838611.9701135-105-205063074879122/AnsiballZ_ini_file.py'
Jan 31 05:50:12 compute-0 sudo[55419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:12 compute-0 python3.9[55421]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:50:12 compute-0 sudo[55419]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:13 compute-0 sudo[55571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nipkaazlqqqwgnkztdneetftalwojoic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838612.7511377-105-51581417871422/AnsiballZ_ini_file.py'
Jan 31 05:50:13 compute-0 sudo[55571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:13 compute-0 python3.9[55573]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:50:13 compute-0 sudo[55571]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:13 compute-0 sudo[55723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkohmxvzhclbprrkcqwaogoytxkusrsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838613.4097912-105-4805373218351/AnsiballZ_ini_file.py'
Jan 31 05:50:13 compute-0 sudo[55723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:13 compute-0 python3.9[55725]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:50:13 compute-0 sudo[55723]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:14 compute-0 sudo[55875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iislzzusutqblwyzmdbljhejtwnkcoaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838614.038141-105-36698759591386/AnsiballZ_ini_file.py'
Jan 31 05:50:14 compute-0 sudo[55875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:14 compute-0 python3.9[55877]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:50:14 compute-0 sudo[55875]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:14 compute-0 sudo[56027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugxmzodexukcyayudqnuafjzulfxlgjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838614.7276099-136-278057541188748/AnsiballZ_dnf.py'
Jan 31 05:50:14 compute-0 sudo[56027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:15 compute-0 python3.9[56029]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:50:16 compute-0 sudo[56027]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:17 compute-0 sudo[56180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubzkiozvrnpkghmccfzjxpelijsuyfvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838616.7724721-147-42346690904272/AnsiballZ_setup.py'
Jan 31 05:50:17 compute-0 sudo[56180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:17 compute-0 python3.9[56182]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:50:17 compute-0 sudo[56180]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:17 compute-0 sudo[56334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yimkrbqbshnvpgnfdgxgmmchvupgbpsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838617.592809-155-257580575489418/AnsiballZ_stat.py'
Jan 31 05:50:17 compute-0 sudo[56334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:18 compute-0 python3.9[56336]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:50:18 compute-0 sudo[56334]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:18 compute-0 sudo[56486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segtwlouivaxksaqwfagefdmoxyamfxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838618.325062-164-26271957332926/AnsiballZ_stat.py'
Jan 31 05:50:18 compute-0 sudo[56486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:18 compute-0 python3.9[56488]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:50:18 compute-0 sudo[56486]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:19 compute-0 sudo[56638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnssgqckguiuetsctmyunzsnezdpoyhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838619.1288354-174-11405982338244/AnsiballZ_command.py'
Jan 31 05:50:19 compute-0 sudo[56638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:19 compute-0 python3.9[56640]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:50:19 compute-0 sudo[56638]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:20 compute-0 sudo[56791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olvcdkstmkkdtvdbdghbqkdwkhbiumsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838619.9501994-184-271521882232182/AnsiballZ_service_facts.py'
Jan 31 05:50:20 compute-0 sudo[56791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:20 compute-0 python3.9[56793]: ansible-service_facts Invoked
Jan 31 05:50:20 compute-0 network[56810]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:50:20 compute-0 network[56811]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:50:20 compute-0 network[56812]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:50:23 compute-0 sudo[56791]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:24 compute-0 sudo[57095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drzyxbpqdragenaytwdrsvmwikopzsmz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769838623.915908-199-83298976121042/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769838623.915908-199-83298976121042/args'
Jan 31 05:50:24 compute-0 sudo[57095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:24 compute-0 sudo[57095]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:24 compute-0 sudo[57262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyiyosvkewjbaivhygqitiqnypaaexty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838624.5273898-210-122698376346160/AnsiballZ_dnf.py'
Jan 31 05:50:24 compute-0 sudo[57262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:25 compute-0 python3.9[57264]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:50:26 compute-0 sudo[57262]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:26 compute-0 sudo[57415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzniytcthgzzrtkuvbwbvvmgmdvymxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838626.4043868-223-142910366716396/AnsiballZ_package_facts.py'
Jan 31 05:50:26 compute-0 sudo[57415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:27 compute-0 python3.9[57417]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 05:50:27 compute-0 sudo[57415]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:28 compute-0 sudo[57567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyqipaelyqqapsdrmjbuxmbioliscaoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838627.9194171-233-59805041119964/AnsiballZ_stat.py'
Jan 31 05:50:28 compute-0 sudo[57567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:28 compute-0 python3.9[57569]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:28 compute-0 sudo[57567]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:28 compute-0 sudo[57692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imbenvfjlsbwfdjgdzrwrkrlrtqhyvtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838627.9194171-233-59805041119964/AnsiballZ_copy.py'
Jan 31 05:50:28 compute-0 sudo[57692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:29 compute-0 python3.9[57694]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838627.9194171-233-59805041119964/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:29 compute-0 sudo[57692]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:29 compute-0 sudo[57846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzurzuvukmbgyyppjgknangreqwggoky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838629.4116383-248-94486889637742/AnsiballZ_stat.py'
Jan 31 05:50:29 compute-0 sudo[57846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:29 compute-0 python3.9[57848]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:29 compute-0 sudo[57846]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:30 compute-0 sudo[57971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkslczcjixkvgwvcypjaxcbuixtqseng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838629.4116383-248-94486889637742/AnsiballZ_copy.py'
Jan 31 05:50:30 compute-0 sudo[57971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:30 compute-0 python3.9[57973]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838629.4116383-248-94486889637742/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:30 compute-0 sudo[57971]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:31 compute-0 sudo[58125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwekiatxubpkqfxkvfnszfjlletnwdww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838630.9228477-269-39961219487298/AnsiballZ_lineinfile.py'
Jan 31 05:50:31 compute-0 sudo[58125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:31 compute-0 python3.9[58127]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:31 compute-0 sudo[58125]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:32 compute-0 sudo[58279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-somzqoxxrlnizxdfbddloonxxxdsbzvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838632.1001053-284-195882794520347/AnsiballZ_setup.py'
Jan 31 05:50:32 compute-0 sudo[58279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:32 compute-0 python3.9[58281]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:50:32 compute-0 sudo[58279]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:33 compute-0 sudo[58363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhgttqzoqkroaqkygfynxsrcpmkqezfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838632.1001053-284-195882794520347/AnsiballZ_systemd.py'
Jan 31 05:50:33 compute-0 sudo[58363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:33 compute-0 python3.9[58365]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:50:34 compute-0 sudo[58363]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:35 compute-0 sudo[58517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpvcxoobmrycujkokhixqtspdvpybler ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838635.3443518-300-125445441156691/AnsiballZ_setup.py'
Jan 31 05:50:35 compute-0 sudo[58517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:35 compute-0 python3.9[58519]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:50:36 compute-0 sudo[58517]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:36 compute-0 sudo[58601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcwxibeprujwtksajsybcgxgkpvpkssk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838635.3443518-300-125445441156691/AnsiballZ_systemd.py'
Jan 31 05:50:36 compute-0 sudo[58601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:36 compute-0 python3.9[58603]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:50:36 compute-0 chronyd[806]: chronyd exiting
Jan 31 05:50:36 compute-0 systemd[1]: Stopping NTP client/server...
Jan 31 05:50:36 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 05:50:36 compute-0 systemd[1]: Stopped NTP client/server.
Jan 31 05:50:36 compute-0 systemd[1]: Starting NTP client/server...
Jan 31 05:50:36 compute-0 chronyd[58611]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 05:50:36 compute-0 chronyd[58611]: Frequency -28.083 +/- 0.153 ppm read from /var/lib/chrony/drift
Jan 31 05:50:36 compute-0 chronyd[58611]: Loaded seccomp filter (level 2)
Jan 31 05:50:36 compute-0 systemd[1]: Started NTP client/server.
Jan 31 05:50:36 compute-0 sudo[58601]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:37 compute-0 sshd-session[53662]: Connection closed by 192.168.122.30 port 32818
Jan 31 05:50:37 compute-0 sshd-session[53659]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:50:37 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 05:50:37 compute-0 systemd[1]: session-11.scope: Consumed 23.807s CPU time.
Jan 31 05:50:37 compute-0 systemd-logind[797]: Session 11 logged out. Waiting for processes to exit.
Jan 31 05:50:37 compute-0 systemd-logind[797]: Removed session 11.
Jan 31 05:50:42 compute-0 sshd-session[58637]: Accepted publickey for zuul from 192.168.122.30 port 35418 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:50:42 compute-0 systemd-logind[797]: New session 12 of user zuul.
Jan 31 05:50:42 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 31 05:50:42 compute-0 sshd-session[58637]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:50:43 compute-0 sudo[58790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxegohzrstgqdtpweboynilynwkngqky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838642.6968827-17-256158424484431/AnsiballZ_file.py'
Jan 31 05:50:43 compute-0 sudo[58790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:43 compute-0 python3.9[58792]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:43 compute-0 sudo[58790]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:43 compute-0 sudo[58942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oolpumqzjeroyeleqcyfffydudkysfsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838643.5485888-29-126385870532710/AnsiballZ_stat.py'
Jan 31 05:50:43 compute-0 sudo[58942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:44 compute-0 python3.9[58944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:44 compute-0 sudo[58942]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:44 compute-0 sudo[59065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baftnuazswflxkumphwxqlakwbtquaej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838643.5485888-29-126385870532710/AnsiballZ_copy.py'
Jan 31 05:50:44 compute-0 sudo[59065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:44 compute-0 python3.9[59067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838643.5485888-29-126385870532710/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:44 compute-0 sudo[59065]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:45 compute-0 sshd-session[58640]: Connection closed by 192.168.122.30 port 35418
Jan 31 05:50:45 compute-0 sshd-session[58637]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:50:45 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 05:50:45 compute-0 systemd[1]: session-12.scope: Consumed 1.407s CPU time.
Jan 31 05:50:45 compute-0 systemd-logind[797]: Session 12 logged out. Waiting for processes to exit.
Jan 31 05:50:45 compute-0 systemd-logind[797]: Removed session 12.
Jan 31 05:50:50 compute-0 sshd-session[59092]: Accepted publickey for zuul from 192.168.122.30 port 56058 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:50:50 compute-0 systemd-logind[797]: New session 13 of user zuul.
Jan 31 05:50:50 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 31 05:50:50 compute-0 sshd-session[59092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:50:51 compute-0 python3.9[59245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:50:52 compute-0 sudo[59399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paywnytxlnqoafrdtbrqxqkyxdwslprq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838652.2546098-28-104934855006293/AnsiballZ_file.py'
Jan 31 05:50:52 compute-0 sudo[59399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:52 compute-0 python3.9[59401]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:52 compute-0 sudo[59399]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:53 compute-0 sudo[59574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghdumhcwzrdfjfpkyabrbsxvkupxixhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838653.0416608-36-188038710290557/AnsiballZ_stat.py'
Jan 31 05:50:53 compute-0 sudo[59574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:53 compute-0 python3.9[59576]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:53 compute-0 sudo[59574]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:54 compute-0 sudo[59697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llgxuzrdxdifzmmghleapghsoxpgoajg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838653.0416608-36-188038710290557/AnsiballZ_copy.py'
Jan 31 05:50:54 compute-0 sudo[59697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:54 compute-0 python3.9[59699]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769838653.0416608-36-188038710290557/.source.json _original_basename=.t84kok8u follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:54 compute-0 sudo[59697]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:54 compute-0 sudo[59849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebvkoibatyxrnjdnzjjpimympgxphfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838654.705982-59-114900308227076/AnsiballZ_stat.py'
Jan 31 05:50:54 compute-0 sudo[59849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:55 compute-0 python3.9[59851]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:55 compute-0 sudo[59849]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:55 compute-0 sudo[59972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgsemzanjuvavjbphcwusldopkqxclak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838654.705982-59-114900308227076/AnsiballZ_copy.py'
Jan 31 05:50:55 compute-0 sudo[59972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:55 compute-0 python3.9[59974]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838654.705982-59-114900308227076/.source _original_basename=.klatimll follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:55 compute-0 sudo[59972]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:56 compute-0 sudo[60124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sprhzcncggyoijuohdtciugttvtsndlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838655.938594-75-102997802429443/AnsiballZ_file.py'
Jan 31 05:50:56 compute-0 sudo[60124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:56 compute-0 python3.9[60126]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:50:56 compute-0 sudo[60124]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:56 compute-0 sudo[60276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylqvguenlywzlwzmkbsnycmijbuottby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838656.573234-83-276443240776369/AnsiballZ_stat.py'
Jan 31 05:50:56 compute-0 sudo[60276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:56 compute-0 python3.9[60278]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:57 compute-0 sudo[60276]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:57 compute-0 sudo[60399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acsubnbtdcmzhmfrjupqeayrgkzrybai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838656.573234-83-276443240776369/AnsiballZ_copy.py'
Jan 31 05:50:57 compute-0 sudo[60399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:57 compute-0 python3.9[60401]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769838656.573234-83-276443240776369/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:50:57 compute-0 sudo[60399]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:58 compute-0 sudo[60551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yccmjmzfrplhyqionilhplfwumlsvssc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838657.7150958-83-36423032414434/AnsiballZ_stat.py'
Jan 31 05:50:58 compute-0 sudo[60551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:58 compute-0 python3.9[60553]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:58 compute-0 sudo[60551]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:58 compute-0 sudo[60674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xszessnryfimhqkygttyhhqhhqxlbhfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838657.7150958-83-36423032414434/AnsiballZ_copy.py'
Jan 31 05:50:58 compute-0 sudo[60674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:58 compute-0 python3.9[60676]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769838657.7150958-83-36423032414434/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:50:58 compute-0 sudo[60674]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:59 compute-0 sudo[60826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onplctuexliqtqtnqnuxgfbyulffwcos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838658.8973815-112-250963502612496/AnsiballZ_file.py'
Jan 31 05:50:59 compute-0 sudo[60826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:59 compute-0 python3.9[60828]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:50:59 compute-0 sudo[60826]: pam_unix(sudo:session): session closed for user root
Jan 31 05:50:59 compute-0 sudo[60978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmmbogjzngulsgrowxgsktjxmfefcvhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838659.512022-120-43544174003576/AnsiballZ_stat.py'
Jan 31 05:50:59 compute-0 sudo[60978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:50:59 compute-0 python3.9[60980]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:50:59 compute-0 sudo[60978]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:00 compute-0 sudo[61101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiktxvbwertacopyedeydskvpvtimpau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838659.512022-120-43544174003576/AnsiballZ_copy.py'
Jan 31 05:51:00 compute-0 sudo[61101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:00 compute-0 python3.9[61103]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838659.512022-120-43544174003576/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:00 compute-0 sudo[61101]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:00 compute-0 sudo[61253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnhoqhcgapjkqeeolgpsvmaddhbkynjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838660.6674328-135-146547828324660/AnsiballZ_stat.py'
Jan 31 05:51:00 compute-0 sudo[61253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:01 compute-0 python3.9[61255]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:01 compute-0 sudo[61253]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:01 compute-0 sudo[61376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlikyejkueukervakloldjcdvzlnvcjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838660.6674328-135-146547828324660/AnsiballZ_copy.py'
Jan 31 05:51:01 compute-0 sudo[61376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:01 compute-0 python3.9[61378]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838660.6674328-135-146547828324660/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:01 compute-0 sudo[61376]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:02 compute-0 sudo[61528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohnlsaqcoegqdgmqqmnkeepczeubhqre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838661.8081877-150-122289394895204/AnsiballZ_systemd.py'
Jan 31 05:51:02 compute-0 sudo[61528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:02 compute-0 python3.9[61530]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:51:02 compute-0 systemd[1]: Reloading.
Jan 31 05:51:02 compute-0 systemd-rc-local-generator[61555]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:51:02 compute-0 systemd-sysv-generator[61560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:51:02 compute-0 systemd[1]: Reloading.
Jan 31 05:51:03 compute-0 systemd-rc-local-generator[61598]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:51:03 compute-0 systemd-sysv-generator[61601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:51:03 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 05:51:03 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 05:51:03 compute-0 sudo[61528]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:03 compute-0 sudo[61755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-objlzgassoahomqpbdlstpscfghbatrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838663.3317287-158-40218075237467/AnsiballZ_stat.py'
Jan 31 05:51:03 compute-0 sudo[61755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:03 compute-0 python3.9[61757]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:03 compute-0 sudo[61755]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:04 compute-0 sudo[61878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztrddgdpksaurwjijnuwwqsvzmxqpxlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838663.3317287-158-40218075237467/AnsiballZ_copy.py'
Jan 31 05:51:04 compute-0 sudo[61878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:04 compute-0 python3.9[61880]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838663.3317287-158-40218075237467/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:04 compute-0 sudo[61878]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:04 compute-0 sudo[62030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qttwbcxmfexbgwdgapcbwxfppolbauvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838664.5157678-173-168606439322652/AnsiballZ_stat.py'
Jan 31 05:51:04 compute-0 sudo[62030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:04 compute-0 python3.9[62032]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:05 compute-0 sudo[62030]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:05 compute-0 sudo[62153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrirbhqtxruibotultexsghxkljtagwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838664.5157678-173-168606439322652/AnsiballZ_copy.py'
Jan 31 05:51:05 compute-0 sudo[62153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:05 compute-0 python3.9[62155]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838664.5157678-173-168606439322652/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:05 compute-0 sudo[62153]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:06 compute-0 sudo[62305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brledppsjbvvplvfcrvrqzuqfsztjsqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838665.7357948-188-79930963183467/AnsiballZ_systemd.py'
Jan 31 05:51:06 compute-0 sudo[62305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:06 compute-0 python3.9[62307]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:51:06 compute-0 systemd[1]: Reloading.
Jan 31 05:51:06 compute-0 systemd-sysv-generator[62336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:51:06 compute-0 systemd-rc-local-generator[62333]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:51:06 compute-0 systemd[1]: Reloading.
Jan 31 05:51:06 compute-0 systemd-sysv-generator[62374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:51:06 compute-0 systemd-rc-local-generator[62368]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:51:06 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 05:51:06 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 05:51:06 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 05:51:06 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 05:51:06 compute-0 sudo[62305]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:07 compute-0 python3.9[62534]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:51:07 compute-0 network[62551]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:51:07 compute-0 network[62552]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:51:07 compute-0 network[62553]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:51:10 compute-0 sudo[62813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbesxjaubbpvlmggeveluetootdhqbch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838670.3289986-204-228955445573060/AnsiballZ_systemd.py'
Jan 31 05:51:10 compute-0 sudo[62813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:10 compute-0 python3.9[62815]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:51:10 compute-0 systemd[1]: Reloading.
Jan 31 05:51:10 compute-0 systemd-rc-local-generator[62839]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:51:10 compute-0 systemd-sysv-generator[62844]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:51:11 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 05:51:11 compute-0 iptables.init[62855]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 05:51:11 compute-0 iptables.init[62855]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 05:51:11 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 05:51:11 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 05:51:11 compute-0 sudo[62813]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:11 compute-0 sudo[63049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdiqxbabzfjtznpwpyrvgioevwjvazhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838671.6001387-204-256539559181663/AnsiballZ_systemd.py'
Jan 31 05:51:11 compute-0 sudo[63049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:12 compute-0 python3.9[63051]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:51:12 compute-0 sudo[63049]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:12 compute-0 sudo[63203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppxuddkzxnvtpnzcpunbbkgeikbgcwcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838672.4250634-220-165569312944330/AnsiballZ_systemd.py'
Jan 31 05:51:12 compute-0 sudo[63203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:12 compute-0 python3.9[63205]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:51:13 compute-0 systemd[1]: Reloading.
Jan 31 05:51:13 compute-0 systemd-sysv-generator[63239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:51:13 compute-0 systemd-rc-local-generator[63236]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:51:13 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 31 05:51:13 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 31 05:51:13 compute-0 sudo[63203]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:14 compute-0 sudo[63396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocedkcapxjcmfajlhcrvcnewfoxleiyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838673.4331179-228-10504667020047/AnsiballZ_command.py'
Jan 31 05:51:14 compute-0 sudo[63396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:14 compute-0 python3.9[63398]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:51:14 compute-0 sudo[63396]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:14 compute-0 sudo[63549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwuudeglbbvsbigpixykxiysxvktxel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838674.6777036-242-140449643461091/AnsiballZ_stat.py'
Jan 31 05:51:14 compute-0 sudo[63549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:15 compute-0 python3.9[63551]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:15 compute-0 sudo[63549]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:15 compute-0 sudo[63674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clnuiddhertjnyssxwipwaltjbnsyyno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838674.6777036-242-140449643461091/AnsiballZ_copy.py'
Jan 31 05:51:15 compute-0 sudo[63674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:15 compute-0 python3.9[63676]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838674.6777036-242-140449643461091/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:15 compute-0 sudo[63674]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:17 compute-0 sudo[63827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjudcebhjynrjihtsdsjldadjhxbmcbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838677.247465-257-253869110968408/AnsiballZ_systemd.py'
Jan 31 05:51:17 compute-0 sudo[63827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:17 compute-0 python3.9[63829]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:51:17 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 05:51:17 compute-0 sshd[1005]: Received SIGHUP; restarting.
Jan 31 05:51:17 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 05:51:17 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Jan 31 05:51:17 compute-0 sshd[1005]: Server listening on :: port 22.
Jan 31 05:51:17 compute-0 sudo[63827]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:18 compute-0 sudo[63983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmuoebubkkmvkywqsmuwqrwokqajfqpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838678.0626764-265-9076540463953/AnsiballZ_file.py'
Jan 31 05:51:18 compute-0 sudo[63983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:18 compute-0 python3.9[63985]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:18 compute-0 sudo[63983]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:19 compute-0 sudo[64135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trhgklpecaidjzsdoqgmmyafcermgvaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838678.6964061-273-142220309675549/AnsiballZ_stat.py'
Jan 31 05:51:19 compute-0 sudo[64135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:19 compute-0 python3.9[64137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:19 compute-0 sudo[64135]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:19 compute-0 sudo[64258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycwqkzpfgnhdioqxndiorkslhuuepfpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838678.6964061-273-142220309675549/AnsiballZ_copy.py'
Jan 31 05:51:19 compute-0 sudo[64258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:19 compute-0 python3.9[64260]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838678.6964061-273-142220309675549/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:19 compute-0 sudo[64258]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:20 compute-0 sudo[64410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cguuogvmfzranuxzzqzmwettuhideibn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838680.0580485-291-202031471217291/AnsiballZ_timezone.py'
Jan 31 05:51:20 compute-0 sudo[64410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:20 compute-0 python3.9[64412]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 05:51:20 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 05:51:20 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 05:51:20 compute-0 sudo[64410]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:21 compute-0 sudo[64566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijugocdxtjjyqbfbcmxfopvojcycvpao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838681.213502-300-129632443959913/AnsiballZ_file.py'
Jan 31 05:51:21 compute-0 sudo[64566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:21 compute-0 python3.9[64568]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:21 compute-0 sudo[64566]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:22 compute-0 sudo[64718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbhijdnqxiwgtmiupwltonwxkvbbnsdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838681.9111128-308-131848044798399/AnsiballZ_stat.py'
Jan 31 05:51:22 compute-0 sudo[64718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:22 compute-0 python3.9[64720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:22 compute-0 sudo[64718]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:22 compute-0 sudo[64841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezlsgodfanwlomcfcxnosllxqrjfhlbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838681.9111128-308-131848044798399/AnsiballZ_copy.py'
Jan 31 05:51:22 compute-0 sudo[64841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:22 compute-0 python3.9[64843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838681.9111128-308-131848044798399/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:22 compute-0 sudo[64841]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:23 compute-0 sudo[64993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-halcyqshhsvtpuumesawcwnnfzvyawlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838683.2722898-323-121295575183935/AnsiballZ_stat.py'
Jan 31 05:51:23 compute-0 sudo[64993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:23 compute-0 python3.9[64995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:23 compute-0 sudo[64993]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:24 compute-0 sudo[65116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybxonpktgyqrekdipweyebznzbhnwwsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838683.2722898-323-121295575183935/AnsiballZ_copy.py'
Jan 31 05:51:24 compute-0 sudo[65116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:24 compute-0 python3.9[65118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769838683.2722898-323-121295575183935/.source.yaml _original_basename=.wz92_vmu follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:24 compute-0 sudo[65116]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:24 compute-0 sudo[65268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbsyzydtylnxyfoiegsipbsgkjnsysnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838684.5257437-338-116162346728567/AnsiballZ_stat.py'
Jan 31 05:51:24 compute-0 sudo[65268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:25 compute-0 python3.9[65270]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:25 compute-0 sudo[65268]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:25 compute-0 sudo[65391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbrpsqtpkqfzkkxwbilhckjgmbnmshzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838684.5257437-338-116162346728567/AnsiballZ_copy.py'
Jan 31 05:51:25 compute-0 sudo[65391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:25 compute-0 python3.9[65393]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838684.5257437-338-116162346728567/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:25 compute-0 sudo[65391]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:26 compute-0 sudo[65543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epdcxqmyyupkptbjjmkfomqstbcqyvpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838685.7814436-353-279328486689207/AnsiballZ_command.py'
Jan 31 05:51:26 compute-0 sudo[65543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:26 compute-0 python3.9[65545]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:51:26 compute-0 sudo[65543]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:26 compute-0 sudo[65696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpgfltivoilqxoebnemvroewhnswsvqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838686.4508648-361-111492606351221/AnsiballZ_command.py'
Jan 31 05:51:26 compute-0 sudo[65696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:26 compute-0 python3.9[65698]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:51:26 compute-0 sudo[65696]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:27 compute-0 sudo[65849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lejugrksvuciqpzhclqwgdcfefoshnpm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769838687.186791-369-17961610360604/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 05:51:27 compute-0 sudo[65849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:27 compute-0 python3[65851]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 05:51:27 compute-0 sudo[65849]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:28 compute-0 sudo[66001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpalbnkyqmdhjathusvtqpseodbxtlue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838687.9904277-377-68173035871138/AnsiballZ_stat.py'
Jan 31 05:51:28 compute-0 sudo[66001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:28 compute-0 python3.9[66003]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:28 compute-0 sudo[66001]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:28 compute-0 sudo[66124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djjxmgjnfqesqvemtqfjibufqczdbwlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838687.9904277-377-68173035871138/AnsiballZ_copy.py'
Jan 31 05:51:28 compute-0 sudo[66124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:29 compute-0 python3.9[66126]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838687.9904277-377-68173035871138/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:29 compute-0 sudo[66124]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:29 compute-0 sudo[66276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqgntanfwuvbsnfsllyveghxmgzsevkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838689.2435277-392-165045504788414/AnsiballZ_stat.py'
Jan 31 05:51:29 compute-0 sudo[66276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:29 compute-0 python3.9[66278]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:29 compute-0 sudo[66276]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:30 compute-0 sudo[66399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyxbiakafrtcjcpxcfbbazhicuzktppl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838689.2435277-392-165045504788414/AnsiballZ_copy.py'
Jan 31 05:51:30 compute-0 sudo[66399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:30 compute-0 python3.9[66401]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838689.2435277-392-165045504788414/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:30 compute-0 sudo[66399]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:30 compute-0 sudo[66551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilrikymhwqqucyblxhulzakqqsojtggx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838690.4458752-407-245954989317538/AnsiballZ_stat.py'
Jan 31 05:51:30 compute-0 sudo[66551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:30 compute-0 python3.9[66553]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:30 compute-0 sudo[66551]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:31 compute-0 sudo[66674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afzsdxumqnzlhysrvbbbmxurzqtkqxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838690.4458752-407-245954989317538/AnsiballZ_copy.py'
Jan 31 05:51:31 compute-0 sudo[66674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:31 compute-0 python3.9[66676]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838690.4458752-407-245954989317538/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:31 compute-0 sudo[66674]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:31 compute-0 sudo[66826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkvshcysrgcsbomdxqjhukpyraxphzeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838691.6003463-422-95398020796212/AnsiballZ_stat.py'
Jan 31 05:51:31 compute-0 sudo[66826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:32 compute-0 python3.9[66828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:32 compute-0 sudo[66826]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:32 compute-0 sudo[66949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srlhhqfzagskillvaejjlcyvoqyvmlbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838691.6003463-422-95398020796212/AnsiballZ_copy.py'
Jan 31 05:51:32 compute-0 sudo[66949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:32 compute-0 python3.9[66951]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838691.6003463-422-95398020796212/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:32 compute-0 sudo[66949]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:33 compute-0 sudo[67101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujvxgipjovqwivcgsylrehekmhsmxvtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838692.7577834-437-265613480412292/AnsiballZ_stat.py'
Jan 31 05:51:33 compute-0 sudo[67101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:33 compute-0 python3.9[67103]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:51:33 compute-0 sudo[67101]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:33 compute-0 sudo[67224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydrvqluqoucsqgdtukvlzduefgahalue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838692.7577834-437-265613480412292/AnsiballZ_copy.py'
Jan 31 05:51:33 compute-0 sudo[67224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:33 compute-0 python3.9[67226]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769838692.7577834-437-265613480412292/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:33 compute-0 sudo[67224]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:34 compute-0 sudo[67376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppehmxuuypmsgqairkblyylbrnvynydg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838694.0301478-452-65891641814509/AnsiballZ_file.py'
Jan 31 05:51:34 compute-0 sudo[67376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:34 compute-0 python3.9[67378]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:34 compute-0 sudo[67376]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:35 compute-0 sudo[67528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urvgyppmoeofignbvgdqwmfwvihokfcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838694.7432342-460-210674425953332/AnsiballZ_command.py'
Jan 31 05:51:35 compute-0 sudo[67528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:35 compute-0 python3.9[67530]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:51:35 compute-0 sudo[67528]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:36 compute-0 sudo[67687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkjoceielxppkootbctqmrdbqucqclrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838695.5471973-468-95115273539936/AnsiballZ_blockinfile.py'
Jan 31 05:51:36 compute-0 sudo[67687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:36 compute-0 python3.9[67689]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:36 compute-0 sudo[67687]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:36 compute-0 sudo[67840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gygfqwmnznyivolcicyyuddphzerwswv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838696.407491-477-30564648278664/AnsiballZ_file.py'
Jan 31 05:51:36 compute-0 sudo[67840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:36 compute-0 python3.9[67842]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:36 compute-0 sudo[67840]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:37 compute-0 sudo[67992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtbigfklxawbhuruergzpywushxmgzap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838697.077184-477-207717037604814/AnsiballZ_file.py'
Jan 31 05:51:37 compute-0 sudo[67992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:37 compute-0 python3.9[67994]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:37 compute-0 sudo[67992]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:38 compute-0 sudo[68144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdechegkuyknaqvwhfwzlsjocvyzcugz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838697.7694829-492-20042571586629/AnsiballZ_mount.py'
Jan 31 05:51:38 compute-0 sudo[68144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:38 compute-0 python3.9[68146]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 05:51:38 compute-0 sudo[68144]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:39 compute-0 sudo[68297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnajfhljcgzamivsjknujwvhtvrhwliw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838698.691898-492-172571511315722/AnsiballZ_mount.py'
Jan 31 05:51:39 compute-0 sudo[68297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:39 compute-0 python3.9[68299]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 05:51:39 compute-0 sudo[68297]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:39 compute-0 sshd-session[59095]: Connection closed by 192.168.122.30 port 56058
Jan 31 05:51:39 compute-0 sshd-session[59092]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:51:39 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 05:51:39 compute-0 systemd[1]: session-13.scope: Consumed 32.822s CPU time.
Jan 31 05:51:39 compute-0 systemd-logind[797]: Session 13 logged out. Waiting for processes to exit.
Jan 31 05:51:39 compute-0 systemd-logind[797]: Removed session 13.
Jan 31 05:51:45 compute-0 sshd-session[68325]: Accepted publickey for zuul from 192.168.122.30 port 53206 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:51:45 compute-0 systemd-logind[797]: New session 14 of user zuul.
Jan 31 05:51:45 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 31 05:51:45 compute-0 sshd-session[68325]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:51:45 compute-0 sudo[68478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhyozihxhhsarkhfuwlumlfrofrdvjve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838705.2546527-16-157060097626878/AnsiballZ_tempfile.py'
Jan 31 05:51:45 compute-0 sudo[68478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:46 compute-0 python3.9[68480]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 05:51:46 compute-0 sudo[68478]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:46 compute-0 sudo[68630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvsetinukvucbeducxwzhefahpazqwnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838706.2447884-28-245277067385549/AnsiballZ_stat.py'
Jan 31 05:51:46 compute-0 sudo[68630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:46 compute-0 python3.9[68632]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:51:46 compute-0 sudo[68630]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:47 compute-0 sudo[68782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvcsdcrupwgjnfvhfswetoiojqwrumac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838707.1044245-38-83840212460448/AnsiballZ_setup.py'
Jan 31 05:51:47 compute-0 sudo[68782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:47 compute-0 python3.9[68784]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:51:48 compute-0 sudo[68782]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:48 compute-0 sudo[68934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntaaygtfjknuzrkkzcqolhuwlxdxgmvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838708.2096462-47-54983710335387/AnsiballZ_blockinfile.py'
Jan 31 05:51:48 compute-0 sudo[68934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:48 compute-0 python3.9[68936]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0hYuRiON/npbsK3YxZQ0e0GXxHvRZX51KfYqA1GKj0gQ2C3H68tgNEbNyr0sftfDPuhYj51H0/ArHvFJ19lm6yn/wR3usRFJekl2qXu9gaIBXIgezD8brSkp872zSISy5AqDV8I4WgjoqXF0YuowEtqDGnj5xTi5pyh8qVeV2Y500OBmmCqYA/n4SGP02fF2Lho3j2MIWLe8oJ7/JBkYmpjsHeKUMD+7iv0LDla/fEYTiq9gjci/Lo8O+t31VKVNjRntj/p8Wo+0uPzfw3dePHKFRC1sg+aMG940YVUyRsDKiHOCZrditEGnrBcLep2TyDO4GzaAE6Tg1D6qLztki2H45FAhYE1dIxodEi8bdo6wH1Ss8vIdez8pkFlW6FTObkLxh00QwTolJ+rMZkmuerAkfYFh8HuEmSa85VCdGrRwosjOAQIlJv4ONNSo4xwyI0/Ckvw80IWv722q4aSUzN06SLnHK5RtyPrGKBhYX1zbKPzTysGB7oaZU+/jzVW0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPah6YYl2J1cbbMzkFDMKRbiSHoV+FPnQcDnTDMFvGI
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLg2p/KWi5iAoiT5fV6/vO4iRSCLXzVhh5LWjjpqbjNRY1tST3/3JaBg2W+zT/5ijaf/FaSVjU4iMjimCFU3BTU=
                                             create=True mode=0644 path=/tmp/ansible.8gsbf8n0 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:48 compute-0 sudo[68934]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:49 compute-0 sudo[69086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjhdxohbbpoyxyvakwzkkneblyyqmjbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838709.0855021-55-118773620715473/AnsiballZ_command.py'
Jan 31 05:51:49 compute-0 sudo[69086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:49 compute-0 python3.9[69088]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8gsbf8n0' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:51:49 compute-0 sudo[69086]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:50 compute-0 sudo[69240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msbhuujrlccfqhlwjsncnkqzrbzloqmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838709.9443107-63-86077381860678/AnsiballZ_file.py'
Jan 31 05:51:50 compute-0 sudo[69240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:50 compute-0 python3.9[69242]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.8gsbf8n0 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:51:50 compute-0 sudo[69240]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:50 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 05:51:50 compute-0 sshd-session[68328]: Connection closed by 192.168.122.30 port 53206
Jan 31 05:51:50 compute-0 sshd-session[68325]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:51:51 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 05:51:51 compute-0 systemd[1]: session-14.scope: Consumed 3.421s CPU time.
Jan 31 05:51:51 compute-0 systemd-logind[797]: Session 14 logged out. Waiting for processes to exit.
Jan 31 05:51:51 compute-0 systemd-logind[797]: Removed session 14.
Jan 31 05:51:56 compute-0 sshd-session[69269]: Accepted publickey for zuul from 192.168.122.30 port 44484 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:51:56 compute-0 systemd-logind[797]: New session 15 of user zuul.
Jan 31 05:51:56 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 31 05:51:56 compute-0 sshd-session[69269]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:51:57 compute-0 python3.9[69422]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:51:58 compute-0 sudo[69576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-narioeryorwbetkpmkzimastofccvubv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838717.771979-27-217814829770297/AnsiballZ_systemd.py'
Jan 31 05:51:58 compute-0 sudo[69576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:58 compute-0 python3.9[69578]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 05:51:58 compute-0 sudo[69576]: pam_unix(sudo:session): session closed for user root
Jan 31 05:51:59 compute-0 sudo[69730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzegkpnbcnhiyfcxnokiyfxwnxpiqtbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838718.7751205-35-123492380573074/AnsiballZ_systemd.py'
Jan 31 05:51:59 compute-0 sudo[69730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:51:59 compute-0 python3.9[69732]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 05:51:59 compute-0 sudo[69730]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:00 compute-0 sudo[69883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzfpmwmwkmifpzavrtoqjfzwyqlkfemy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838719.6484206-44-85398467362304/AnsiballZ_command.py'
Jan 31 05:52:00 compute-0 sudo[69883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:00 compute-0 python3.9[69885]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:00 compute-0 sudo[69883]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:00 compute-0 sudo[70036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-basdlsdqicyebvdlvzwinvikmaqnwyyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838720.4056041-52-90823266060318/AnsiballZ_stat.py'
Jan 31 05:52:00 compute-0 sudo[70036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:01 compute-0 python3.9[70038]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:52:01 compute-0 sudo[70036]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:01 compute-0 sudo[70190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxcipvdabhgndgrmstzigcygraflznwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838721.4178565-60-60081518921597/AnsiballZ_command.py'
Jan 31 05:52:01 compute-0 sudo[70190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:01 compute-0 python3.9[70192]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:01 compute-0 sudo[70190]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:02 compute-0 sudo[70345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekrwmahvvidofnnypvlizbdqghmgrfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838722.175818-68-4649208219484/AnsiballZ_file.py'
Jan 31 05:52:02 compute-0 sudo[70345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:02 compute-0 python3.9[70347]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:52:03 compute-0 sudo[70345]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:03 compute-0 sshd-session[69272]: Connection closed by 192.168.122.30 port 44484
Jan 31 05:52:03 compute-0 sshd-session[69269]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:52:03 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 05:52:03 compute-0 systemd[1]: session-15.scope: Consumed 4.094s CPU time.
Jan 31 05:52:03 compute-0 systemd-logind[797]: Session 15 logged out. Waiting for processes to exit.
Jan 31 05:52:03 compute-0 systemd-logind[797]: Removed session 15.
Jan 31 05:52:08 compute-0 sshd-session[70372]: Accepted publickey for zuul from 192.168.122.30 port 55192 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:52:08 compute-0 systemd-logind[797]: New session 16 of user zuul.
Jan 31 05:52:08 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 31 05:52:08 compute-0 sshd-session[70372]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:52:09 compute-0 python3.9[70525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:52:10 compute-0 sudo[70679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtilyfbpnaxtfcqpyymzlpriavldofud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838729.9266279-29-207798691501805/AnsiballZ_setup.py'
Jan 31 05:52:10 compute-0 sudo[70679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:10 compute-0 python3.9[70681]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:52:10 compute-0 sudo[70679]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:11 compute-0 sudo[70763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fedskmqxancfinaenizawqcvrfhytsxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769838729.9266279-29-207798691501805/AnsiballZ_dnf.py'
Jan 31 05:52:11 compute-0 sudo[70763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:11 compute-0 python3.9[70765]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 05:52:12 compute-0 sudo[70763]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:13 compute-0 python3.9[70916]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:14 compute-0 python3.9[71067]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 05:52:15 compute-0 python3.9[71217]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:52:15 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:52:16 compute-0 python3.9[71368]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:52:16 compute-0 sshd-session[70375]: Connection closed by 192.168.122.30 port 55192
Jan 31 05:52:16 compute-0 sshd-session[70372]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:52:16 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 05:52:16 compute-0 systemd[1]: session-16.scope: Consumed 5.391s CPU time.
Jan 31 05:52:16 compute-0 systemd-logind[797]: Session 16 logged out. Waiting for processes to exit.
Jan 31 05:52:16 compute-0 systemd-logind[797]: Removed session 16.
Jan 31 05:52:23 compute-0 sshd-session[71393]: Accepted publickey for zuul from 38.102.83.111 port 45428 ssh2: RSA SHA256:nLI9W8FlAkHSY0pJrzeKIqjEMoolvwyb6dlyVD5ZrF8
Jan 31 05:52:23 compute-0 systemd-logind[797]: New session 17 of user zuul.
Jan 31 05:52:23 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 31 05:52:23 compute-0 sshd-session[71393]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:52:24 compute-0 sudo[71469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlpnkyhcddlnfqhpspsucqhdcmhztyvy ; /usr/bin/python3'
Jan 31 05:52:24 compute-0 sudo[71469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:24 compute-0 useradd[71473]: new group: name=ceph-admin, GID=42478
Jan 31 05:52:24 compute-0 useradd[71473]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 31 05:52:24 compute-0 sudo[71469]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:24 compute-0 sudo[71555]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpewvgpwdxkojmkawduuwrrvhtotvlej ; /usr/bin/python3'
Jan 31 05:52:24 compute-0 sudo[71555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:24 compute-0 sudo[71555]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:25 compute-0 sudo[71628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgjgnluxlgtqmpcexczffanvczbaneid ; /usr/bin/python3'
Jan 31 05:52:25 compute-0 sudo[71628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:25 compute-0 sudo[71628]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:25 compute-0 sudo[71678]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kilbmrowdgglcsyijrukdjeubitdjyhb ; /usr/bin/python3'
Jan 31 05:52:25 compute-0 sudo[71678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:25 compute-0 sudo[71678]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:25 compute-0 sudo[71704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dosobwmrgyzykemtzzyplpufapbihapv ; /usr/bin/python3'
Jan 31 05:52:25 compute-0 sudo[71704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:25 compute-0 sudo[71704]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:26 compute-0 sudo[71730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dimxndswpuovleibyfwietggxacsacet ; /usr/bin/python3'
Jan 31 05:52:26 compute-0 sudo[71730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:26 compute-0 sudo[71730]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:26 compute-0 sudo[71756]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxmtmxqmfxcrnsdmxsupyqkwneueilwe ; /usr/bin/python3'
Jan 31 05:52:26 compute-0 sudo[71756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:26 compute-0 sudo[71756]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:26 compute-0 sudo[71834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksqtiwwlawloqbxnjtfnggqximrkysin ; /usr/bin/python3'
Jan 31 05:52:26 compute-0 sudo[71834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:27 compute-0 sudo[71834]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:27 compute-0 sudo[71907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dejcgrgnqojkmccrihwycidkajjgjjlh ; /usr/bin/python3'
Jan 31 05:52:27 compute-0 sudo[71907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:27 compute-0 sudo[71907]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:27 compute-0 sudo[72009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfyzcufsytaxuxkbdgcaxpxogmwogcy ; /usr/bin/python3'
Jan 31 05:52:27 compute-0 sudo[72009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:27 compute-0 sudo[72009]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:27 compute-0 sudo[72082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luhxrtoimmydmoklhhfkowibimxuoxvd ; /usr/bin/python3'
Jan 31 05:52:27 compute-0 sudo[72082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:28 compute-0 sudo[72082]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:28 compute-0 sudo[72132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrnqwlfyqmifhishuvekunvkffymmayc ; /usr/bin/python3'
Jan 31 05:52:28 compute-0 sudo[72132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:28 compute-0 python3[72134]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:52:29 compute-0 sudo[72132]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:29 compute-0 sudo[72227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvikofngykzskmrvfahiltbykzhyuvpe ; /usr/bin/python3'
Jan 31 05:52:29 compute-0 sudo[72227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:30 compute-0 python3[72229]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 05:52:31 compute-0 sudo[72227]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:31 compute-0 sudo[72254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrqgbfzkfaujheizoqpblydyhwuamsid ; /usr/bin/python3'
Jan 31 05:52:31 compute-0 sudo[72254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:31 compute-0 python3[72256]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:52:31 compute-0 sudo[72254]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:31 compute-0 sudo[72280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axxvzytpvpqdgekqgzqtxqyykipzhhuj ; /usr/bin/python3'
Jan 31 05:52:31 compute-0 sudo[72280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:31 compute-0 python3[72282]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:31 compute-0 kernel: loop: module loaded
Jan 31 05:52:31 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 31 05:52:31 compute-0 sudo[72280]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:32 compute-0 sudo[72315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqebwhqtgmfkihnxuathwdhdimhcxvl ; /usr/bin/python3'
Jan 31 05:52:32 compute-0 sudo[72315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:32 compute-0 python3[72317]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:32 compute-0 lvm[72320]: PV /dev/loop3 not used.
Jan 31 05:52:32 compute-0 lvm[72329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:52:32 compute-0 sudo[72315]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:32 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 05:52:32 compute-0 lvm[72331]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 05:52:32 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 05:52:32 compute-0 sudo[72408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqxioclltiodnyeimuzkmhatbkaroyqw ; /usr/bin/python3'
Jan 31 05:52:32 compute-0 sudo[72408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:32 compute-0 python3[72410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:52:32 compute-0 sudo[72408]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:33 compute-0 sudo[72481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uknahhswlvlrjxqcbftokvtzxpdlpmsm ; /usr/bin/python3'
Jan 31 05:52:33 compute-0 sudo[72481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:33 compute-0 python3[72483]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838752.6013281-36191-267776283107157/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:52:33 compute-0 sudo[72481]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:33 compute-0 sudo[72531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbitmcwthdkkhunmwoxdyqqfyfdibvhr ; /usr/bin/python3'
Jan 31 05:52:33 compute-0 sudo[72531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:33 compute-0 python3[72533]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:52:33 compute-0 systemd[1]: Reloading.
Jan 31 05:52:34 compute-0 systemd-sysv-generator[72562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:52:34 compute-0 systemd-rc-local-generator[72559]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:52:34 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 05:52:34 compute-0 bash[72574]: /dev/loop3: [64513]:4329574 (/var/lib/ceph-osd-0.img)
Jan 31 05:52:34 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 05:52:34 compute-0 lvm[72575]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:52:34 compute-0 lvm[72575]: VG ceph_vg0 finished
Jan 31 05:52:34 compute-0 sudo[72531]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:34 compute-0 sudo[72599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nafkdpkciebxzuotripnzwsksvtylwmm ; /usr/bin/python3'
Jan 31 05:52:34 compute-0 sudo[72599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:34 compute-0 python3[72601]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 05:52:35 compute-0 sudo[72599]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:35 compute-0 sudo[72626]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okayyirogkkfsswkntkpbwuhkwhdbiex ; /usr/bin/python3'
Jan 31 05:52:35 compute-0 sudo[72626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:35 compute-0 python3[72628]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:52:36 compute-0 sudo[72626]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:36 compute-0 sudo[72652]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbqsuyygdiznabghqpmmzmegqjctdtwi ; /usr/bin/python3'
Jan 31 05:52:36 compute-0 sudo[72652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:36 compute-0 python3[72654]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:36 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Jan 31 05:52:36 compute-0 sudo[72652]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:36 compute-0 sudo[72684]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcmhfjsferpmlbqpcrhvkawqdnfucabj ; /usr/bin/python3'
Jan 31 05:52:36 compute-0 sudo[72684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:36 compute-0 python3[72686]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:36 compute-0 lvm[72689]: PV /dev/loop4 not used.
Jan 31 05:52:36 compute-0 lvm[72699]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:52:36 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 31 05:52:36 compute-0 sudo[72684]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:36 compute-0 lvm[72701]:   1 logical volume(s) in volume group "ceph_vg1" now active
Jan 31 05:52:37 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 31 05:52:37 compute-0 sudo[72777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whywopblpurgpcezvgxxinxswsplzwgw ; /usr/bin/python3'
Jan 31 05:52:37 compute-0 sudo[72777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:37 compute-0 python3[72779]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:52:37 compute-0 sudo[72777]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:37 compute-0 sudo[72850]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zllcjgrunidstascucmbijejzbgcmxcw ; /usr/bin/python3'
Jan 31 05:52:37 compute-0 sudo[72850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:37 compute-0 python3[72852]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838757.1250148-36218-190618842018179/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:52:37 compute-0 sudo[72850]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:38 compute-0 sudo[72900]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhktzusmquomwddpwpwocvsrdahjpqlt ; /usr/bin/python3'
Jan 31 05:52:38 compute-0 sudo[72900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:38 compute-0 python3[72902]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:52:38 compute-0 systemd[1]: Reloading.
Jan 31 05:52:38 compute-0 systemd-rc-local-generator[72930]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:52:38 compute-0 systemd-sysv-generator[72936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:52:38 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 05:52:38 compute-0 bash[72943]: /dev/loop4: [64513]:4355724 (/var/lib/ceph-osd-1.img)
Jan 31 05:52:38 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 05:52:38 compute-0 sudo[72900]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:38 compute-0 lvm[72944]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:52:38 compute-0 lvm[72944]: VG ceph_vg1 finished
Jan 31 05:52:38 compute-0 sudo[72968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eglxnpuncpmkmvyjjryjtajwxbekuxdw ; /usr/bin/python3'
Jan 31 05:52:38 compute-0 sudo[72968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:38 compute-0 python3[72970]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 05:52:40 compute-0 sudo[72968]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:40 compute-0 sudo[72995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poliheummjvofhpglszaujneozajcbha ; /usr/bin/python3'
Jan 31 05:52:40 compute-0 sudo[72995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:40 compute-0 python3[72997]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:52:40 compute-0 sudo[72995]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:40 compute-0 sudo[73021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmpmwkmnkyiwhuqpunkoiyzmemmubuoi ; /usr/bin/python3'
Jan 31 05:52:40 compute-0 sudo[73021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:40 compute-0 python3[73023]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:40 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Jan 31 05:52:40 compute-0 sudo[73021]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:40 compute-0 sudo[73053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpgnvymrkadamyqapqhrvsubzykoqrxh ; /usr/bin/python3'
Jan 31 05:52:40 compute-0 sudo[73053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:41 compute-0 python3[73055]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:41 compute-0 lvm[73058]: PV /dev/loop5 not used.
Jan 31 05:52:41 compute-0 lvm[73068]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:52:41 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 31 05:52:41 compute-0 sudo[73053]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:41 compute-0 lvm[73070]:   1 logical volume(s) in volume group "ceph_vg2" now active
Jan 31 05:52:41 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 31 05:52:41 compute-0 sudo[73146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvqzxarsyvrizbdjddtnjbspbvaczrtv ; /usr/bin/python3'
Jan 31 05:52:41 compute-0 sudo[73146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:41 compute-0 python3[73148]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:52:41 compute-0 sudo[73146]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:41 compute-0 sudo[73219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owigrjtoyvitmagtigvodypclozwlohb ; /usr/bin/python3'
Jan 31 05:52:41 compute-0 sudo[73219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:42 compute-0 python3[73221]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838761.4876235-36245-80550286073394/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:52:42 compute-0 sudo[73219]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:42 compute-0 sudo[73269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suagzrrpyzdgpkymlmanbrhqdxgibqqy ; /usr/bin/python3'
Jan 31 05:52:42 compute-0 sudo[73269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:42 compute-0 python3[73271]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:52:42 compute-0 systemd[1]: Reloading.
Jan 31 05:52:42 compute-0 systemd-sysv-generator[73305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:52:42 compute-0 systemd-rc-local-generator[73301]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:52:42 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 05:52:42 compute-0 bash[73312]: /dev/loop5: [64513]:4355726 (/var/lib/ceph-osd-2.img)
Jan 31 05:52:42 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 05:52:42 compute-0 lvm[73313]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:52:42 compute-0 lvm[73313]: VG ceph_vg2 finished
Jan 31 05:52:42 compute-0 sudo[73269]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:44 compute-0 python3[73337]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:52:46 compute-0 sudo[73428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxliwtelupquxwylayoalitlchqkxwy ; /usr/bin/python3'
Jan 31 05:52:46 compute-0 sudo[73428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:46 compute-0 chronyd[58611]: Selected source 158.69.193.108 (pool.ntp.org)
Jan 31 05:52:46 compute-0 python3[73430]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 05:52:48 compute-0 sudo[73428]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:48 compute-0 sudo[73485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pagknjdotrjqtnyxmekygexzzudbhxwu ; /usr/bin/python3'
Jan 31 05:52:48 compute-0 sudo[73485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:49 compute-0 python3[73487]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 05:52:51 compute-0 groupadd[73497]: group added to /etc/group: name=cephadm, GID=993
Jan 31 05:52:51 compute-0 groupadd[73497]: group added to /etc/gshadow: name=cephadm
Jan 31 05:52:51 compute-0 groupadd[73497]: new group: name=cephadm, GID=993
Jan 31 05:52:51 compute-0 useradd[73504]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 31 05:52:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 05:52:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 05:52:52 compute-0 sudo[73485]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:52 compute-0 sudo[73603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agjawkqlrnczbdjkuzoojnqltdtulsva ; /usr/bin/python3'
Jan 31 05:52:52 compute-0 sudo[73603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 05:52:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 05:52:52 compute-0 systemd[1]: run-r1fb1ee2744cf4433a930e3dba38b3b37.service: Deactivated successfully.
Jan 31 05:52:52 compute-0 python3[73605]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:52:52 compute-0 sudo[73603]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:52 compute-0 sudo[73632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klvapbqxdbbewgijbczncadpiaaakmye ; /usr/bin/python3'
Jan 31 05:52:52 compute-0 sudo[73632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:53 compute-0 python3[73634]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:52:53 compute-0 sudo[73632]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:53 compute-0 sudo[73671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnijcsodvumuewohwwsvyoyxronqpxhy ; /usr/bin/python3'
Jan 31 05:52:53 compute-0 sudo[73671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:54 compute-0 python3[73673]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:52:54 compute-0 sudo[73671]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:54 compute-0 sudo[73697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bypltecrhtinyozuncszdsaeoswzdvph ; /usr/bin/python3'
Jan 31 05:52:54 compute-0 sudo[73697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:54 compute-0 python3[73699]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:52:54 compute-0 sudo[73697]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:54 compute-0 sudo[73775]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azrbmcbrwwlznsyiyqmtikknxisybdkg ; /usr/bin/python3'
Jan 31 05:52:54 compute-0 sudo[73775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:54 compute-0 python3[73777]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:52:54 compute-0 sudo[73775]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:55 compute-0 sudo[73848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofyvltzxxpqkhujiraoocvqwncrascdu ; /usr/bin/python3'
Jan 31 05:52:55 compute-0 sudo[73848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:55 compute-0 python3[73850]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838774.6798236-36393-157058659760171/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:52:55 compute-0 sudo[73848]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:55 compute-0 sudo[73950]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgfvxbgxojsdvcdekuzogphxcjtusvmd ; /usr/bin/python3'
Jan 31 05:52:55 compute-0 sudo[73950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:56 compute-0 python3[73952]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:52:56 compute-0 sudo[73950]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:56 compute-0 sudo[74023]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otmclchnmfesxifpaggxxrtsogwprnos ; /usr/bin/python3'
Jan 31 05:52:56 compute-0 sudo[74023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:56 compute-0 python3[74025]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838775.7572265-36411-35789419095072/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:52:56 compute-0 sudo[74023]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:56 compute-0 sudo[74073]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjnmzoeprpyamnftlvmelbcfcepybhpn ; /usr/bin/python3'
Jan 31 05:52:56 compute-0 sudo[74073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:56 compute-0 python3[74075]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:52:56 compute-0 sudo[74073]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:56 compute-0 sudo[74101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzneyaukkmnikzfiuzxioyfapuaqtgcx ; /usr/bin/python3'
Jan 31 05:52:56 compute-0 sudo[74101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:57 compute-0 python3[74103]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:52:57 compute-0 sudo[74101]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:57 compute-0 sudo[74129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozpfghqbweeqnkgoqzdvmmvndjufyjzk ; /usr/bin/python3'
Jan 31 05:52:57 compute-0 sudo[74129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:57 compute-0 python3[74131]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:52:57 compute-0 sudo[74129]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:57 compute-0 python3[74157]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:52:58 compute-0 sudo[74181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqoryctxbgxmnfymffoycpjwpqqauboy ; /usr/bin/python3'
Jan 31 05:52:58 compute-0 sudo[74181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:52:58 compute-0 python3[74183]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:52:58 compute-0 sshd-session[74187]: Accepted publickey for ceph-admin from 192.168.122.100 port 58900 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:52:58 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 05:52:58 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 05:52:58 compute-0 systemd-logind[797]: New session 18 of user ceph-admin.
Jan 31 05:52:58 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 05:52:58 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 31 05:52:58 compute-0 systemd[74191]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:52:58 compute-0 systemd[74191]: Queued start job for default target Main User Target.
Jan 31 05:52:58 compute-0 systemd[74191]: Created slice User Application Slice.
Jan 31 05:52:58 compute-0 systemd[74191]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 05:52:58 compute-0 systemd[74191]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 05:52:58 compute-0 systemd[74191]: Reached target Paths.
Jan 31 05:52:58 compute-0 systemd[74191]: Reached target Timers.
Jan 31 05:52:58 compute-0 systemd[74191]: Starting D-Bus User Message Bus Socket...
Jan 31 05:52:58 compute-0 systemd[74191]: Starting Create User's Volatile Files and Directories...
Jan 31 05:52:58 compute-0 systemd[74191]: Finished Create User's Volatile Files and Directories.
Jan 31 05:52:58 compute-0 systemd[74191]: Listening on D-Bus User Message Bus Socket.
Jan 31 05:52:58 compute-0 systemd[74191]: Reached target Sockets.
Jan 31 05:52:58 compute-0 systemd[74191]: Reached target Basic System.
Jan 31 05:52:58 compute-0 systemd[74191]: Reached target Main User Target.
Jan 31 05:52:58 compute-0 systemd[74191]: Startup finished in 130ms.
Jan 31 05:52:58 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 31 05:52:58 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Jan 31 05:52:58 compute-0 sshd-session[74187]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:52:58 compute-0 sudo[74207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 31 05:52:58 compute-0 sudo[74207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:52:58 compute-0 sudo[74207]: pam_unix(sudo:session): session closed for user root
Jan 31 05:52:58 compute-0 sshd-session[74206]: Received disconnect from 192.168.122.100 port 58900:11: disconnected by user
Jan 31 05:52:58 compute-0 sshd-session[74206]: Disconnected from user ceph-admin 192.168.122.100 port 58900
Jan 31 05:52:58 compute-0 sshd-session[74187]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 31 05:52:58 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 05:52:58 compute-0 systemd-logind[797]: Session 18 logged out. Waiting for processes to exit.
Jan 31 05:52:58 compute-0 systemd-logind[797]: Removed session 18.
Jan 31 05:52:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:52:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1474462287-merged.mount: Deactivated successfully.
Jan 31 05:53:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1474462287-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 05:53:09 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 31 05:53:09 compute-0 systemd[74191]: Activating special unit Exit the Session...
Jan 31 05:53:09 compute-0 systemd[74191]: Stopped target Main User Target.
Jan 31 05:53:09 compute-0 systemd[74191]: Stopped target Basic System.
Jan 31 05:53:09 compute-0 systemd[74191]: Stopped target Paths.
Jan 31 05:53:09 compute-0 systemd[74191]: Stopped target Sockets.
Jan 31 05:53:09 compute-0 systemd[74191]: Stopped target Timers.
Jan 31 05:53:09 compute-0 systemd[74191]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 05:53:09 compute-0 systemd[74191]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 05:53:09 compute-0 systemd[74191]: Closed D-Bus User Message Bus Socket.
Jan 31 05:53:09 compute-0 systemd[74191]: Stopped Create User's Volatile Files and Directories.
Jan 31 05:53:09 compute-0 systemd[74191]: Removed slice User Application Slice.
Jan 31 05:53:09 compute-0 systemd[74191]: Reached target Shutdown.
Jan 31 05:53:09 compute-0 systemd[74191]: Finished Exit the Session.
Jan 31 05:53:09 compute-0 systemd[74191]: Reached target Exit the Session.
Jan 31 05:53:09 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 31 05:53:09 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 31 05:53:09 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 31 05:53:09 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 31 05:53:09 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 31 05:53:09 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 31 05:53:09 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 31 05:53:19 compute-0 podman[74284]: 2026-01-31 05:53:19.515472763 +0000 UTC m=+20.361522718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:19 compute-0 podman[74365]: 2026-01-31 05:53:19.583286688 +0000 UTC m=+0.046900318 container create f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0 (image=quay.io/ceph/ceph:v20, name=suspicious_jackson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:19 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 05:53:19 compute-0 systemd[1]: Started libpod-conmon-f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0.scope.
Jan 31 05:53:19 compute-0 podman[74365]: 2026-01-31 05:53:19.557504734 +0000 UTC m=+0.021118424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:19 compute-0 podman[74365]: 2026-01-31 05:53:19.69874591 +0000 UTC m=+0.162359570 container init f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0 (image=quay.io/ceph/ceph:v20, name=suspicious_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:53:19 compute-0 podman[74365]: 2026-01-31 05:53:19.70870176 +0000 UTC m=+0.172315390 container start f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0 (image=quay.io/ceph/ceph:v20, name=suspicious_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:19 compute-0 podman[74365]: 2026-01-31 05:53:19.712654181 +0000 UTC m=+0.176267881 container attach f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0 (image=quay.io/ceph/ceph:v20, name=suspicious_jackson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:19 compute-0 suspicious_jackson[74381]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 05:53:19 compute-0 podman[74365]: 2026-01-31 05:53:19.823675928 +0000 UTC m=+0.287289528 container died f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0 (image=quay.io/ceph/ceph:v20, name=suspicious_jackson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:19 compute-0 systemd[1]: libpod-f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0.scope: Deactivated successfully.
Jan 31 05:53:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-34d5ea8cb40858d72b514d24fc85e91c66aabdd02f26348d9da84c8975c4bae8-merged.mount: Deactivated successfully.
Jan 31 05:53:19 compute-0 podman[74365]: 2026-01-31 05:53:19.861582763 +0000 UTC m=+0.325196383 container remove f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0 (image=quay.io/ceph/ceph:v20, name=suspicious_jackson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:19 compute-0 systemd[1]: libpod-conmon-f5d02305f74391ee3eec03e684a5868ae2a2597361ce91d39b0f1a5357fe56a0.scope: Deactivated successfully.
Jan 31 05:53:19 compute-0 podman[74398]: 2026-01-31 05:53:19.927099163 +0000 UTC m=+0.049618585 container create e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd (image=quay.io/ceph/ceph:v20, name=elated_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:19 compute-0 systemd[1]: Started libpod-conmon-e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd.scope.
Jan 31 05:53:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:19 compute-0 podman[74398]: 2026-01-31 05:53:19.898831319 +0000 UTC m=+0.021350801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:20 compute-0 podman[74398]: 2026-01-31 05:53:20.000506064 +0000 UTC m=+0.123025456 container init e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd (image=quay.io/ceph/ceph:v20, name=elated_wilbur, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 05:53:20 compute-0 podman[74398]: 2026-01-31 05:53:20.005851494 +0000 UTC m=+0.128370886 container start e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd (image=quay.io/ceph/ceph:v20, name=elated_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:20 compute-0 elated_wilbur[74415]: 167 167
Jan 31 05:53:20 compute-0 systemd[1]: libpod-e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74398]: 2026-01-31 05:53:20.010572277 +0000 UTC m=+0.133091679 container attach e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd (image=quay.io/ceph/ceph:v20, name=elated_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:53:20 compute-0 podman[74398]: 2026-01-31 05:53:20.011258446 +0000 UTC m=+0.133777878 container died e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd (image=quay.io/ceph/ceph:v20, name=elated_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:20 compute-0 podman[74398]: 2026-01-31 05:53:20.047593607 +0000 UTC m=+0.170113009 container remove e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd (image=quay.io/ceph/ceph:v20, name=elated_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:53:20 compute-0 systemd[1]: libpod-conmon-e312037a17b4086e8686802eb24776079d21caa065e071016dd5f4fa3092edbd.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74432]: 2026-01-31 05:53:20.103022483 +0000 UTC m=+0.041334852 container create dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea (image=quay.io/ceph/ceph:v20, name=goofy_gates, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:53:20 compute-0 systemd[1]: Started libpod-conmon-dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea.scope.
Jan 31 05:53:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:20 compute-0 podman[74432]: 2026-01-31 05:53:20.160484927 +0000 UTC m=+0.098797326 container init dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea (image=quay.io/ceph/ceph:v20, name=goofy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:53:20 compute-0 podman[74432]: 2026-01-31 05:53:20.163396959 +0000 UTC m=+0.101709298 container start dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea (image=quay.io/ceph/ceph:v20, name=goofy_gates, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:20 compute-0 podman[74432]: 2026-01-31 05:53:20.168553213 +0000 UTC m=+0.106865582 container attach dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea (image=quay.io/ceph/ceph:v20, name=goofy_gates, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:20 compute-0 podman[74432]: 2026-01-31 05:53:20.083742042 +0000 UTC m=+0.022054391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:20 compute-0 goofy_gates[74448]: AQDQmH1pHKbYChAAj5ou9ZliRyI74jZF6tpByQ==
Jan 31 05:53:20 compute-0 systemd[1]: libpod-dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74432]: 2026-01-31 05:53:20.184705577 +0000 UTC m=+0.123017946 container died dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea (image=quay.io/ceph/ceph:v20, name=goofy_gates, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:53:20 compute-0 podman[74432]: 2026-01-31 05:53:20.223793165 +0000 UTC m=+0.162105494 container remove dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea (image=quay.io/ceph/ceph:v20, name=goofy_gates, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 05:53:20 compute-0 systemd[1]: libpod-conmon-dbf1b3964b652741915afac4fbe3f2afb6dab625127d7ed66a7413d7eef9c1ea.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74470]: 2026-01-31 05:53:20.279642883 +0000 UTC m=+0.042392981 container create 17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448 (image=quay.io/ceph/ceph:v20, name=wonderful_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:20 compute-0 systemd[1]: Started libpod-conmon-17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448.scope.
Jan 31 05:53:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:20 compute-0 podman[74470]: 2026-01-31 05:53:20.257038378 +0000 UTC m=+0.019788526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:20 compute-0 podman[74470]: 2026-01-31 05:53:20.35431495 +0000 UTC m=+0.117065108 container init 17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448 (image=quay.io/ceph/ceph:v20, name=wonderful_turing, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:20 compute-0 podman[74470]: 2026-01-31 05:53:20.358484977 +0000 UTC m=+0.121235075 container start 17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448 (image=quay.io/ceph/ceph:v20, name=wonderful_turing, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:20 compute-0 podman[74470]: 2026-01-31 05:53:20.362581942 +0000 UTC m=+0.125332040 container attach 17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448 (image=quay.io/ceph/ceph:v20, name=wonderful_turing, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:20 compute-0 wonderful_turing[74486]: AQDQmH1pZu94FhAArD5j5j9lWOnphb/klckTQQ==
Jan 31 05:53:20 compute-0 systemd[1]: libpod-17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74470]: 2026-01-31 05:53:20.379436736 +0000 UTC m=+0.142186834 container died 17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448 (image=quay.io/ceph/ceph:v20, name=wonderful_turing, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:53:20 compute-0 podman[74470]: 2026-01-31 05:53:20.428214565 +0000 UTC m=+0.190964673 container remove 17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448 (image=quay.io/ceph/ceph:v20, name=wonderful_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:20 compute-0 systemd[1]: libpod-conmon-17d1b55f05a772a1fdaca6a4f28dce4b142abe23fc5d4aa1782c3680c8b8e448.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74506]: 2026-01-31 05:53:20.4974604 +0000 UTC m=+0.051413035 container create 71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374 (image=quay.io/ceph/ceph:v20, name=vigilant_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:53:20 compute-0 systemd[1]: Started libpod-conmon-71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374.scope.
Jan 31 05:53:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:20 compute-0 podman[74506]: 2026-01-31 05:53:20.568169536 +0000 UTC m=+0.122122191 container init 71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374 (image=quay.io/ceph/ceph:v20, name=vigilant_moser, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:20 compute-0 podman[74506]: 2026-01-31 05:53:20.477783197 +0000 UTC m=+0.031735832 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:20 compute-0 podman[74506]: 2026-01-31 05:53:20.57509032 +0000 UTC m=+0.129042955 container start 71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374 (image=quay.io/ceph/ceph:v20, name=vigilant_moser, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:53:20 compute-0 podman[74506]: 2026-01-31 05:53:20.580331837 +0000 UTC m=+0.134284472 container attach 71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374 (image=quay.io/ceph/ceph:v20, name=vigilant_moser, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:20 compute-0 vigilant_moser[74522]: AQDQmH1pK5giJBAAkOlUJKkX8MzhMoWvnIDT4w==
Jan 31 05:53:20 compute-0 systemd[1]: libpod-71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74506]: 2026-01-31 05:53:20.612212243 +0000 UTC m=+0.166164868 container died 71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374 (image=quay.io/ceph/ceph:v20, name=vigilant_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8d12d5f6cba481a7aab5410a573e875e72a6740e685327794e7bb02a83bee28-merged.mount: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74506]: 2026-01-31 05:53:20.662353661 +0000 UTC m=+0.216306286 container remove 71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374 (image=quay.io/ceph/ceph:v20, name=vigilant_moser, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:20 compute-0 systemd[1]: libpod-conmon-71491cd711fc3e7fd55cbcdfdcb7f4bdf73d4cce84ea6c7247e7ec1511c1e374.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74542]: 2026-01-31 05:53:20.723720844 +0000 UTC m=+0.045587011 container create 7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff (image=quay.io/ceph/ceph:v20, name=focused_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:53:20 compute-0 systemd[1]: Started libpod-conmon-7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff.scope.
Jan 31 05:53:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87166bc3e0e1ca9d28a6f8772d4e56cb5a5198d63b5c27dd5e98ff255e4803f6/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:20 compute-0 podman[74542]: 2026-01-31 05:53:20.697784806 +0000 UTC m=+0.019651033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:20 compute-0 podman[74542]: 2026-01-31 05:53:20.807841496 +0000 UTC m=+0.129707733 container init 7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff (image=quay.io/ceph/ceph:v20, name=focused_darwin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:20 compute-0 podman[74542]: 2026-01-31 05:53:20.815308596 +0000 UTC m=+0.137174753 container start 7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff (image=quay.io/ceph/ceph:v20, name=focused_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:53:20 compute-0 podman[74542]: 2026-01-31 05:53:20.819345859 +0000 UTC m=+0.141212016 container attach 7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff (image=quay.io/ceph/ceph:v20, name=focused_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:20 compute-0 focused_darwin[74560]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 31 05:53:20 compute-0 focused_darwin[74560]: setting min_mon_release = tentacle
Jan 31 05:53:20 compute-0 focused_darwin[74560]: /usr/bin/monmaptool: set fsid to 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:20 compute-0 focused_darwin[74560]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 31 05:53:20 compute-0 systemd[1]: libpod-7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff.scope: Deactivated successfully.
Jan 31 05:53:20 compute-0 podman[74542]: 2026-01-31 05:53:20.872227634 +0000 UTC m=+0.194093791 container died 7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff (image=quay.io/ceph/ceph:v20, name=focused_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:21 compute-0 podman[74542]: 2026-01-31 05:53:21.190871333 +0000 UTC m=+0.512737490 container remove 7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff (image=quay.io/ceph/ceph:v20, name=focused_darwin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:53:21 compute-0 systemd[1]: libpod-conmon-7a6624547df750fc20c66fc3cf1903a4d5be1d1ff080bfed6ef46c4c53e9dbff.scope: Deactivated successfully.
Jan 31 05:53:21 compute-0 podman[74579]: 2026-01-31 05:53:21.268184124 +0000 UTC m=+0.055866780 container create 2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e (image=quay.io/ceph/ceph:v20, name=strange_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:21 compute-0 systemd[1]: Started libpod-conmon-2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e.scope.
Jan 31 05:53:21 compute-0 podman[74579]: 2026-01-31 05:53:21.241793923 +0000 UTC m=+0.029476609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3032315a40bffd1a0ba66c7586a28d6e702ee4c2118ef36ccf98b8ee3e1e9c25/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3032315a40bffd1a0ba66c7586a28d6e702ee4c2118ef36ccf98b8ee3e1e9c25/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3032315a40bffd1a0ba66c7586a28d6e702ee4c2118ef36ccf98b8ee3e1e9c25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3032315a40bffd1a0ba66c7586a28d6e702ee4c2118ef36ccf98b8ee3e1e9c25/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:21 compute-0 podman[74579]: 2026-01-31 05:53:21.393799532 +0000 UTC m=+0.181482228 container init 2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e (image=quay.io/ceph/ceph:v20, name=strange_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:21 compute-0 podman[74579]: 2026-01-31 05:53:21.399994246 +0000 UTC m=+0.187676872 container start 2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e (image=quay.io/ceph/ceph:v20, name=strange_chaplygin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 05:53:21 compute-0 podman[74579]: 2026-01-31 05:53:21.405329566 +0000 UTC m=+0.193012272 container attach 2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e (image=quay.io/ceph/ceph:v20, name=strange_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:53:22 compute-0 systemd[1]: libpod-2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e.scope: Deactivated successfully.
Jan 31 05:53:22 compute-0 podman[74579]: 2026-01-31 05:53:22.356799865 +0000 UTC m=+1.144482531 container died 2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e (image=quay.io/ceph/ceph:v20, name=strange_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3032315a40bffd1a0ba66c7586a28d6e702ee4c2118ef36ccf98b8ee3e1e9c25-merged.mount: Deactivated successfully.
Jan 31 05:53:22 compute-0 podman[74579]: 2026-01-31 05:53:22.826931687 +0000 UTC m=+1.614614353 container remove 2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e (image=quay.io/ceph/ceph:v20, name=strange_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:22 compute-0 systemd[1]: libpod-conmon-2f039aa76d62a552267753d9f3b1dce1cd1d1994dad046f4b2bf9700319d2f4e.scope: Deactivated successfully.
Jan 31 05:53:22 compute-0 systemd[1]: Reloading.
Jan 31 05:53:23 compute-0 systemd-rc-local-generator[74662]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:53:23 compute-0 systemd-sysv-generator[74667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:53:23 compute-0 systemd[1]: Reloading.
Jan 31 05:53:23 compute-0 systemd-rc-local-generator[74693]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:53:23 compute-0 systemd-sysv-generator[74697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:53:23 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 05:53:23 compute-0 systemd[1]: Reloading.
Jan 31 05:53:23 compute-0 systemd-rc-local-generator[74736]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:53:23 compute-0 systemd-sysv-generator[74739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:53:23 compute-0 systemd[1]: Reached target Ceph cluster 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:53:23 compute-0 systemd[1]: Reloading.
Jan 31 05:53:23 compute-0 systemd-rc-local-generator[74775]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:53:23 compute-0 systemd-sysv-generator[74779]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:53:23 compute-0 systemd[1]: Reloading.
Jan 31 05:53:23 compute-0 systemd-rc-local-generator[74814]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:53:23 compute-0 systemd-sysv-generator[74817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:53:24 compute-0 systemd[1]: Created slice Slice /system/ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:53:24 compute-0 systemd[1]: Reached target System Time Set.
Jan 31 05:53:24 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 31 05:53:24 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:24 compute-0 podman[74874]: 2026-01-31 05:53:24.342170649 +0000 UTC m=+0.064820972 container create 3e89010af337937628ca00e2894c9d425d358b8c198f22bca5be5fcfa63c1e21 (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:24 compute-0 podman[74874]: 2026-01-31 05:53:24.297793902 +0000 UTC m=+0.020444255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c32c05f69b01d9f2a8035dcfba0b857efbc8e4787eb368fbf03b20c446aa32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c32c05f69b01d9f2a8035dcfba0b857efbc8e4787eb368fbf03b20c446aa32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c32c05f69b01d9f2a8035dcfba0b857efbc8e4787eb368fbf03b20c446aa32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c32c05f69b01d9f2a8035dcfba0b857efbc8e4787eb368fbf03b20c446aa32/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:24 compute-0 podman[74874]: 2026-01-31 05:53:24.606608025 +0000 UTC m=+0.329258378 container init 3e89010af337937628ca00e2894c9d425d358b8c198f22bca5be5fcfa63c1e21 (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:53:24 compute-0 podman[74874]: 2026-01-31 05:53:24.614724143 +0000 UTC m=+0.337374466 container start 3e89010af337937628ca00e2894c9d425d358b8c198f22bca5be5fcfa63c1e21 (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:24 compute-0 bash[74874]: 3e89010af337937628ca00e2894c9d425d358b8c198f22bca5be5fcfa63c1e21
Jan 31 05:53:24 compute-0 ceph-mon[74893]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:53:24 compute-0 ceph-mon[74893]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: pidfile_write: ignore empty --pid-file
Jan 31 05:53:24 compute-0 systemd[1]: Started Ceph mon.compute-0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:53:24 compute-0 ceph-mon[74893]: load: jerasure load: lrc 
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: RocksDB version: 7.9.2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Git sha 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: DB SUMMARY
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: DB Session ID:  03186WLO7DJD9U59BESR
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: CURRENT file:  CURRENT
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                         Options.error_if_exists: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                       Options.create_if_missing: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                                     Options.env: 0x55d3c40e0440
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                                Options.info_log: 0x55d3c62233e0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                              Options.statistics: (nil)
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                               Options.use_fsync: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                              Options.db_log_dir: 
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                                 Options.wal_dir: 
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                    Options.write_buffer_manager: 0x55d3c61a2140
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.unordered_write: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                               Options.row_cache: None
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                              Options.wal_filter: None
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.two_write_queues: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.wal_compression: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.atomic_flush: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.max_background_jobs: 2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.max_background_compactions: -1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.max_subcompactions: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                          Options.max_open_files: -1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Compression algorithms supported:
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         kZSTD supported: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         kXpressCompression supported: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         kBZip2Compression supported: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         kLZ4Compression supported: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         kZlibCompression supported: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         kSnappyCompression supported: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:           Options.merge_operator: 
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:        Options.compaction_filter: None
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d3c61ae600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d3c61938d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:          Options.compression: NoCompression
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.num_levels: 7
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bd7b07cd-b6df-4f61-a546-6834a7dc38a0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838804677225, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838804693822, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "03186WLO7DJD9U59BESR", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838804693973, "job": 1, "event": "recovery_finished"}
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 05:53:24 compute-0 podman[74915]: 2026-01-31 05:53:24.740471854 +0000 UTC m=+0.027880814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:24 compute-0 podman[74915]: 2026-01-31 05:53:24.864092386 +0000 UTC m=+0.151501266 container create 0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552 (image=quay.io/ceph/ceph:v20, name=gracious_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d3c61c0e00
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: DB pointer 0x55d3c630c000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:53:24 compute-0 ceph-mon[74893]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d3c61938d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 05:53:24 compute-0 ceph-mon[74893]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@-1(???) e0 preinit fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 31 05:53:24 compute-0 ceph-mon[74893]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 05:53:24 compute-0 ceph-mon[74893]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T05:53:20.867409+0000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : created 2026-01-31T05:53:20.867409+0000
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T05:53:21.547506Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864288,os=Linux}
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).mds e1 new map
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-31T05:53:24:897973+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 05:53:24 compute-0 systemd[1]: Started libpod-conmon-0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552.scope.
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mkfs 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 05:53:24 compute-0 ceph-mon[74893]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 05:53:24 compute-0 ceph-mon[74893]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 05:53:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07daee62df2db044fa1493893d330b6efee30d9f182161052f42110e4f9cb808/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07daee62df2db044fa1493893d330b6efee30d9f182161052f42110e4f9cb808/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07daee62df2db044fa1493893d330b6efee30d9f182161052f42110e4f9cb808/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:24 compute-0 podman[74915]: 2026-01-31 05:53:24.952428956 +0000 UTC m=+0.239837926 container init 0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552 (image=quay.io/ceph/ceph:v20, name=gracious_mendel, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:24 compute-0 podman[74915]: 2026-01-31 05:53:24.958186858 +0000 UTC m=+0.245595738 container start 0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552 (image=quay.io/ceph/ceph:v20, name=gracious_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:53:24 compute-0 podman[74915]: 2026-01-31 05:53:24.962405557 +0000 UTC m=+0.249814527 container attach 0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552 (image=quay.io/ceph/ceph:v20, name=gracious_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:53:25 compute-0 ceph-mon[74893]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 05:53:25 compute-0 ceph-mon[74893]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/70283128' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:   cluster:
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     id:     797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     health: HEALTH_OK
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:  
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:   services:
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     mon: 1 daemons, quorum compute-0 (age 0.260725s) [leader: compute-0]
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     mgr: no daemons active
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     osd: 0 osds: 0 up, 0 in
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:  
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:   data:
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     pools:   0 pools, 0 pgs
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     objects: 0 objects, 0 B
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     usage:   0 B used, 0 B / 0 B avail
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:     pgs:     
Jan 31 05:53:25 compute-0 gracious_mendel[74948]:  
Jan 31 05:53:25 compute-0 systemd[1]: libpod-0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552.scope: Deactivated successfully.
Jan 31 05:53:25 compute-0 podman[74915]: 2026-01-31 05:53:25.169806621 +0000 UTC m=+0.457215491 container died 0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552 (image=quay.io/ceph/ceph:v20, name=gracious_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 05:53:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-07daee62df2db044fa1493893d330b6efee30d9f182161052f42110e4f9cb808-merged.mount: Deactivated successfully.
Jan 31 05:53:25 compute-0 podman[74915]: 2026-01-31 05:53:25.203670382 +0000 UTC m=+0.491079252 container remove 0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552 (image=quay.io/ceph/ceph:v20, name=gracious_mendel, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 05:53:25 compute-0 systemd[1]: libpod-conmon-0fad236ac336bea89f1446019bda4322afe55998ca1fb28689c45367817c2552.scope: Deactivated successfully.
Jan 31 05:53:25 compute-0 podman[74986]: 2026-01-31 05:53:25.25985847 +0000 UTC m=+0.042525335 container create ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac (image=quay.io/ceph/ceph:v20, name=confident_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:25 compute-0 systemd[1]: Started libpod-conmon-ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac.scope.
Jan 31 05:53:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592fc4f669738b00aa15f93f8f9f525656a8e5d8b933b54ab7645cea255f90d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592fc4f669738b00aa15f93f8f9f525656a8e5d8b933b54ab7645cea255f90d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592fc4f669738b00aa15f93f8f9f525656a8e5d8b933b54ab7645cea255f90d4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592fc4f669738b00aa15f93f8f9f525656a8e5d8b933b54ab7645cea255f90d4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:25 compute-0 podman[74986]: 2026-01-31 05:53:25.237966425 +0000 UTC m=+0.020633350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:25 compute-0 podman[74986]: 2026-01-31 05:53:25.34032827 +0000 UTC m=+0.122995155 container init ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac (image=quay.io/ceph/ceph:v20, name=confident_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:53:25 compute-0 podman[74986]: 2026-01-31 05:53:25.350154206 +0000 UTC m=+0.132821091 container start ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac (image=quay.io/ceph/ceph:v20, name=confident_feistel, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:25 compute-0 podman[74986]: 2026-01-31 05:53:25.354442866 +0000 UTC m=+0.137109701 container attach ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac (image=quay.io/ceph/ceph:v20, name=confident_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:53:25 compute-0 ceph-mon[74893]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 05:53:25 compute-0 ceph-mon[74893]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/125272236' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 05:53:25 compute-0 ceph-mon[74893]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/125272236' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 05:53:25 compute-0 confident_feistel[75003]: 
Jan 31 05:53:25 compute-0 confident_feistel[75003]: [global]
Jan 31 05:53:25 compute-0 confident_feistel[75003]:         fsid = 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:25 compute-0 confident_feistel[75003]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 05:53:25 compute-0 confident_feistel[75003]:         osd_crush_chooseleaf_type = 0
Jan 31 05:53:25 compute-0 systemd[1]: libpod-ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac.scope: Deactivated successfully.
Jan 31 05:53:25 compute-0 podman[74986]: 2026-01-31 05:53:25.576864262 +0000 UTC m=+0.359531117 container died ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac (image=quay.io/ceph/ceph:v20, name=confident_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:53:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-592fc4f669738b00aa15f93f8f9f525656a8e5d8b933b54ab7645cea255f90d4-merged.mount: Deactivated successfully.
Jan 31 05:53:25 compute-0 podman[74986]: 2026-01-31 05:53:25.619636123 +0000 UTC m=+0.402302998 container remove ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac (image=quay.io/ceph/ceph:v20, name=confident_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:25 compute-0 systemd[1]: libpod-conmon-ccf0c44f1125daf56470e8c7e2b965063e6c622397b25c7851399ea3f03879ac.scope: Deactivated successfully.
Jan 31 05:53:25 compute-0 podman[75041]: 2026-01-31 05:53:25.679617388 +0000 UTC m=+0.044143951 container create d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f (image=quay.io/ceph/ceph:v20, name=practical_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:25 compute-0 systemd[1]: Started libpod-conmon-d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f.scope.
Jan 31 05:53:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf662047e3be3847cd44685f5c5c297b38de6f379f5257f9f91b1c91d50933f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf662047e3be3847cd44685f5c5c297b38de6f379f5257f9f91b1c91d50933f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf662047e3be3847cd44685f5c5c297b38de6f379f5257f9f91b1c91d50933f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf662047e3be3847cd44685f5c5c297b38de6f379f5257f9f91b1c91d50933f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:25 compute-0 podman[75041]: 2026-01-31 05:53:25.65763838 +0000 UTC m=+0.022164993 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:25 compute-0 podman[75041]: 2026-01-31 05:53:25.758904174 +0000 UTC m=+0.123430717 container init d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f (image=quay.io/ceph/ceph:v20, name=practical_hugle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:25 compute-0 podman[75041]: 2026-01-31 05:53:25.763363599 +0000 UTC m=+0.127890132 container start d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f (image=quay.io/ceph/ceph:v20, name=practical_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:25 compute-0 podman[75041]: 2026-01-31 05:53:25.766644072 +0000 UTC m=+0.131170615 container attach d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f (image=quay.io/ceph/ceph:v20, name=practical_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:25 compute-0 ceph-mon[74893]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:53:25 compute-0 ceph-mon[74893]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741459672' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:53:25 compute-0 ceph-mon[74893]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 05:53:25 compute-0 ceph-mon[74893]: monmap epoch 1
Jan 31 05:53:25 compute-0 ceph-mon[74893]: fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:25 compute-0 ceph-mon[74893]: last_changed 2026-01-31T05:53:20.867409+0000
Jan 31 05:53:25 compute-0 ceph-mon[74893]: created 2026-01-31T05:53:20.867409+0000
Jan 31 05:53:25 compute-0 ceph-mon[74893]: min_mon_release 20 (tentacle)
Jan 31 05:53:25 compute-0 ceph-mon[74893]: election_strategy: 1
Jan 31 05:53:25 compute-0 ceph-mon[74893]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 05:53:25 compute-0 ceph-mon[74893]: fsmap 
Jan 31 05:53:25 compute-0 ceph-mon[74893]: osdmap e1: 0 total, 0 up, 0 in
Jan 31 05:53:25 compute-0 ceph-mon[74893]: mgrmap e1: no daemons active
Jan 31 05:53:25 compute-0 ceph-mon[74893]: from='client.? 192.168.122.100:0/70283128' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 05:53:25 compute-0 ceph-mon[74893]: from='client.? 192.168.122.100:0/125272236' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 05:53:25 compute-0 ceph-mon[74893]: from='client.? 192.168.122.100:0/125272236' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 05:53:25 compute-0 systemd[1]: libpod-d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f.scope: Deactivated successfully.
Jan 31 05:53:25 compute-0 conmon[75058]: conmon d5ac921b1e6d169652ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f.scope/container/memory.events
Jan 31 05:53:25 compute-0 podman[75041]: 2026-01-31 05:53:25.944938718 +0000 UTC m=+0.309465281 container died d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f (image=quay.io/ceph/ceph:v20, name=practical_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 31 05:53:25 compute-0 podman[75041]: 2026-01-31 05:53:25.99665436 +0000 UTC m=+0.361180913 container remove d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f (image=quay.io/ceph/ceph:v20, name=practical_hugle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:53:26 compute-0 systemd[1]: libpod-conmon-d5ac921b1e6d169652aca4455267b880d27b7fdf6a45d24d3ba476e0e5d0c35f.scope: Deactivated successfully.
Jan 31 05:53:26 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:53:26 compute-0 ceph-mon[74893]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 05:53:26 compute-0 ceph-mon[74893]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 05:53:26 compute-0 ceph-mon[74893]: mon.compute-0@0(leader) e1 shutdown
Jan 31 05:53:26 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0[74889]: 2026-01-31T05:53:26.172+0000 7f1d3401f640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 05:53:26 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0[74889]: 2026-01-31T05:53:26.172+0000 7f1d3401f640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 05:53:26 compute-0 ceph-mon[74893]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 05:53:26 compute-0 ceph-mon[74893]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 05:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf662047e3be3847cd44685f5c5c297b38de6f379f5257f9f91b1c91d50933f-merged.mount: Deactivated successfully.
Jan 31 05:53:26 compute-0 podman[75128]: 2026-01-31 05:53:26.325204827 +0000 UTC m=+0.190187423 container died 3e89010af337937628ca00e2894c9d425d358b8c198f22bca5be5fcfa63c1e21 (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-59c32c05f69b01d9f2a8035dcfba0b857efbc8e4787eb368fbf03b20c446aa32-merged.mount: Deactivated successfully.
Jan 31 05:53:26 compute-0 podman[75128]: 2026-01-31 05:53:26.367660389 +0000 UTC m=+0.232643025 container remove 3e89010af337937628ca00e2894c9d425d358b8c198f22bca5be5fcfa63c1e21 (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:53:26 compute-0 bash[75128]: ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0
Jan 31 05:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 05:53:26 compute-0 systemd[1]: ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@mon.compute-0.service: Deactivated successfully.
Jan 31 05:53:26 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:53:26 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:53:26 compute-0 podman[75231]: 2026-01-31 05:53:26.76045952 +0000 UTC m=+0.048626527 container create 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d810cc04806dbb8bbfe29235216ca2a9e5747d3d3cf20f4990c428796d23857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d810cc04806dbb8bbfe29235216ca2a9e5747d3d3cf20f4990c428796d23857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d810cc04806dbb8bbfe29235216ca2a9e5747d3d3cf20f4990c428796d23857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d810cc04806dbb8bbfe29235216ca2a9e5747d3d3cf20f4990c428796d23857/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:26 compute-0 podman[75231]: 2026-01-31 05:53:26.817307266 +0000 UTC m=+0.105474283 container init 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:26 compute-0 podman[75231]: 2026-01-31 05:53:26.820819745 +0000 UTC m=+0.108986742 container start 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:26 compute-0 bash[75231]: 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d
Jan 31 05:53:26 compute-0 podman[75231]: 2026-01-31 05:53:26.742369352 +0000 UTC m=+0.030536329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:26 compute-0 systemd[1]: Started Ceph mon.compute-0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:53:26 compute-0 ceph-mon[75251]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 05:53:26 compute-0 ceph-mon[75251]: pidfile_write: ignore empty --pid-file
Jan 31 05:53:26 compute-0 ceph-mon[75251]: load: jerasure load: lrc 
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: RocksDB version: 7.9.2
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Git sha 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: DB SUMMARY
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: DB Session ID:  T9FROEUWTS2FQTYPCQMI
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: CURRENT file:  CURRENT
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60223 ; 
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                         Options.error_if_exists: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                       Options.create_if_missing: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                                     Options.env: 0x55e2e4abd440
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                                Options.info_log: 0x55e2e66bbe80
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                              Options.statistics: (nil)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                               Options.use_fsync: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                              Options.db_log_dir: 
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                                 Options.wal_dir: 
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                    Options.write_buffer_manager: 0x55e2e6706140
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.unordered_write: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                               Options.row_cache: None
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                              Options.wal_filter: None
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.two_write_queues: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.wal_compression: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.atomic_flush: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.max_background_jobs: 2
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.max_background_compactions: -1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.max_subcompactions: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                          Options.max_open_files: -1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Compression algorithms supported:
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         kZSTD supported: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         kXpressCompression supported: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         kBZip2Compression supported: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         kLZ4Compression supported: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         kZlibCompression supported: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         kSnappyCompression supported: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:           Options.merge_operator: 
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:        Options.compaction_filter: None
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2e6712a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2e66f78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:          Options.compression: NoCompression
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.num_levels: 7
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: bd7b07cd-b6df-4f61-a546-6834a7dc38a0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838806855737, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838806859606, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58422, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55774, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838806, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838806859677, "job": 1, "event": "recovery_finished"}
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e2e6724e00
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: DB pointer 0x55e2e686e000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:53:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.44 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.44 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 4.75 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 4.75 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2e66f78d0#2 capacity: 512.00 MB usage: 26.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,25.61 KB,0.0048846%) FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 05:53:26 compute-0 ceph-mon[75251]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(???) e1 preinit fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(???).mds e1 new map
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-31T05:53:24.897973+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 05:53:26 compute-0 ceph-mon[75251]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T05:53:20.867409+0000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : created 2026-01-31T05:53:20.867409+0000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 05:53:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 05:53:26 compute-0 podman[75252]: 2026-01-31 05:53:26.903104336 +0000 UTC m=+0.051709034 container create 89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f (image=quay.io/ceph/ceph:v20, name=flamboyant_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: monmap epoch 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:26 compute-0 ceph-mon[75251]: last_changed 2026-01-31T05:53:20.867409+0000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: created 2026-01-31T05:53:20.867409+0000
Jan 31 05:53:26 compute-0 ceph-mon[75251]: min_mon_release 20 (tentacle)
Jan 31 05:53:26 compute-0 ceph-mon[75251]: election_strategy: 1
Jan 31 05:53:26 compute-0 ceph-mon[75251]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 05:53:26 compute-0 ceph-mon[75251]: fsmap 
Jan 31 05:53:26 compute-0 ceph-mon[75251]: osdmap e1: 0 total, 0 up, 0 in
Jan 31 05:53:26 compute-0 ceph-mon[75251]: mgrmap e1: no daemons active
Jan 31 05:53:26 compute-0 systemd[1]: Started libpod-conmon-89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f.scope.
Jan 31 05:53:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e56077b4c642bb9161fb1e95caa6a1ce2f2c252931bfc9c8d395482cc8fe046/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e56077b4c642bb9161fb1e95caa6a1ce2f2c252931bfc9c8d395482cc8fe046/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e56077b4c642bb9161fb1e95caa6a1ce2f2c252931bfc9c8d395482cc8fe046/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:26 compute-0 podman[75252]: 2026-01-31 05:53:26.880211743 +0000 UTC m=+0.028816481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:26 compute-0 podman[75252]: 2026-01-31 05:53:26.985314024 +0000 UTC m=+0.133918772 container init 89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f (image=quay.io/ceph/ceph:v20, name=flamboyant_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:26 compute-0 podman[75252]: 2026-01-31 05:53:26.992267859 +0000 UTC m=+0.140872537 container start 89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f (image=quay.io/ceph/ceph:v20, name=flamboyant_herschel, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:53:26 compute-0 podman[75252]: 2026-01-31 05:53:26.99620366 +0000 UTC m=+0.144808378 container attach 89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f (image=quay.io/ceph/ceph:v20, name=flamboyant_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:53:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 31 05:53:27 compute-0 systemd[1]: libpod-89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f.scope: Deactivated successfully.
Jan 31 05:53:27 compute-0 podman[75333]: 2026-01-31 05:53:27.310746753 +0000 UTC m=+0.033049149 container died 89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f (image=quay.io/ceph/ceph:v20, name=flamboyant_herschel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 05:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e56077b4c642bb9161fb1e95caa6a1ce2f2c252931bfc9c8d395482cc8fe046-merged.mount: Deactivated successfully.
Jan 31 05:53:27 compute-0 podman[75333]: 2026-01-31 05:53:27.42209937 +0000 UTC m=+0.144401796 container remove 89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f (image=quay.io/ceph/ceph:v20, name=flamboyant_herschel, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:27 compute-0 systemd[1]: libpod-conmon-89e2c5741cb229572e9798d933343830fd5dd8c116d576b0d0dda5970643100f.scope: Deactivated successfully.
Jan 31 05:53:27 compute-0 podman[75348]: 2026-01-31 05:53:27.499957977 +0000 UTC m=+0.051914639 container create f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869 (image=quay.io/ceph/ceph:v20, name=gracious_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:53:27 compute-0 systemd[1]: Started libpod-conmon-f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869.scope.
Jan 31 05:53:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9a70787387df1fa28d8ff93e81e0f1bc6307bba50275788441bdaa9d13e56a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9a70787387df1fa28d8ff93e81e0f1bc6307bba50275788441bdaa9d13e56a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9a70787387df1fa28d8ff93e81e0f1bc6307bba50275788441bdaa9d13e56a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:27 compute-0 podman[75348]: 2026-01-31 05:53:27.478691049 +0000 UTC m=+0.030647801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:27 compute-0 podman[75348]: 2026-01-31 05:53:27.598180555 +0000 UTC m=+0.150137277 container init f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869 (image=quay.io/ceph/ceph:v20, name=gracious_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 05:53:27 compute-0 podman[75348]: 2026-01-31 05:53:27.602884917 +0000 UTC m=+0.154841619 container start f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869 (image=quay.io/ceph/ceph:v20, name=gracious_curran, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:53:27 compute-0 podman[75348]: 2026-01-31 05:53:27.60653905 +0000 UTC m=+0.158495772 container attach f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869 (image=quay.io/ceph/ceph:v20, name=gracious_curran, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:53:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 31 05:53:27 compute-0 systemd[1]: libpod-f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869.scope: Deactivated successfully.
Jan 31 05:53:27 compute-0 podman[75348]: 2026-01-31 05:53:27.814282454 +0000 UTC m=+0.366239186 container died f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869 (image=quay.io/ceph/ceph:v20, name=gracious_curran, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b9a70787387df1fa28d8ff93e81e0f1bc6307bba50275788441bdaa9d13e56a-merged.mount: Deactivated successfully.
Jan 31 05:53:27 compute-0 podman[75348]: 2026-01-31 05:53:27.859142113 +0000 UTC m=+0.411098805 container remove f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869 (image=quay.io/ceph/ceph:v20, name=gracious_curran, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:27 compute-0 systemd[1]: libpod-conmon-f03c54634378c74b5bae5910b5e3194db20b7f01b0bdd46c54f9afdc1c555869.scope: Deactivated successfully.
Jan 31 05:53:27 compute-0 systemd[1]: Reloading.
Jan 31 05:53:27 compute-0 systemd-sysv-generator[75434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:53:27 compute-0 systemd-rc-local-generator[75428]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:53:28 compute-0 systemd[1]: Reloading.
Jan 31 05:53:28 compute-0 systemd-rc-local-generator[75465]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:53:28 compute-0 systemd-sysv-generator[75468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:53:28 compute-0 systemd[1]: Starting Ceph mgr.compute-0.vavqfa for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:53:28 compute-0 podman[75530]: 2026-01-31 05:53:28.495087562 +0000 UTC m=+0.033173142 container create f894eac925418877bff9990bccfe45c7567afa78215ec8d7c577e6517ad5c623 (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cc1a3d84b9f94f75a115ecc773d7706e70f69f5f55321d0f23e6c00ec95f08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cc1a3d84b9f94f75a115ecc773d7706e70f69f5f55321d0f23e6c00ec95f08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cc1a3d84b9f94f75a115ecc773d7706e70f69f5f55321d0f23e6c00ec95f08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cc1a3d84b9f94f75a115ecc773d7706e70f69f5f55321d0f23e6c00ec95f08/merged/var/lib/ceph/mgr/ceph-compute-0.vavqfa supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:28 compute-0 podman[75530]: 2026-01-31 05:53:28.534923941 +0000 UTC m=+0.073009531 container init f894eac925418877bff9990bccfe45c7567afa78215ec8d7c577e6517ad5c623 (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 05:53:28 compute-0 podman[75530]: 2026-01-31 05:53:28.542125663 +0000 UTC m=+0.080211263 container start f894eac925418877bff9990bccfe45c7567afa78215ec8d7c577e6517ad5c623 (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:28 compute-0 bash[75530]: f894eac925418877bff9990bccfe45c7567afa78215ec8d7c577e6517ad5c623
Jan 31 05:53:28 compute-0 podman[75530]: 2026-01-31 05:53:28.480066211 +0000 UTC m=+0.018151801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:28 compute-0 systemd[1]: Started Ceph mgr.compute-0.vavqfa for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:53:28 compute-0 ceph-mgr[75550]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:53:28 compute-0 ceph-mgr[75550]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 05:53:28 compute-0 ceph-mgr[75550]: pidfile_write: ignore empty --pid-file
Jan 31 05:53:28 compute-0 podman[75551]: 2026-01-31 05:53:28.608989281 +0000 UTC m=+0.041056164 container create 427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9 (image=quay.io/ceph/ceph:v20, name=hardcore_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:28 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'alerts'
Jan 31 05:53:28 compute-0 systemd[1]: Started libpod-conmon-427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9.scope.
Jan 31 05:53:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15a0f19cd81190c2406618be13d35f68d7dfba00c4ae7960dfbf21336ead8d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15a0f19cd81190c2406618be13d35f68d7dfba00c4ae7960dfbf21336ead8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15a0f19cd81190c2406618be13d35f68d7dfba00c4ae7960dfbf21336ead8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:28 compute-0 podman[75551]: 2026-01-31 05:53:28.592135128 +0000 UTC m=+0.024202021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:28 compute-0 podman[75551]: 2026-01-31 05:53:28.700731137 +0000 UTC m=+0.132798030 container init 427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9 (image=quay.io/ceph/ceph:v20, name=hardcore_grothendieck, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:28 compute-0 podman[75551]: 2026-01-31 05:53:28.708430694 +0000 UTC m=+0.140497567 container start 427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9 (image=quay.io/ceph/ceph:v20, name=hardcore_grothendieck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 31 05:53:28 compute-0 podman[75551]: 2026-01-31 05:53:28.713024393 +0000 UTC m=+0.145091276 container attach 427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9 (image=quay.io/ceph/ceph:v20, name=hardcore_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:53:28 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'balancer'
Jan 31 05:53:28 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'cephadm'
Jan 31 05:53:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 05:53:28 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009694289' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]: 
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]: {
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "health": {
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "status": "HEALTH_OK",
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "checks": {},
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "mutes": []
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     },
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "election_epoch": 5,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "quorum": [
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         0
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     ],
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "quorum_names": [
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "compute-0"
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     ],
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "quorum_age": 2,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "monmap": {
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "epoch": 1,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "min_mon_release_name": "tentacle",
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_mons": 1
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     },
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "osdmap": {
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "epoch": 1,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_osds": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_up_osds": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "osd_up_since": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_in_osds": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "osd_in_since": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_remapped_pgs": 0
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     },
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "pgmap": {
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "pgs_by_state": [],
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_pgs": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_pools": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_objects": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "data_bytes": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "bytes_used": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "bytes_avail": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "bytes_total": 0
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     },
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "fsmap": {
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "epoch": 1,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "btime": "2026-01-31T05:53:24:897973+0000",
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "by_rank": [],
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "up:standby": 0
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     },
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "mgrmap": {
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "available": false,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "num_standbys": 0,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "modules": [
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:             "iostat",
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:             "nfs"
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         ],
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "services": {}
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     },
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "servicemap": {
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "epoch": 1,
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "modified": "2026-01-31T05:53:24.900966+0000",
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:         "services": {}
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     },
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]:     "progress_events": {}
Jan 31 05:53:28 compute-0 hardcore_grothendieck[75588]: }
Jan 31 05:53:28 compute-0 systemd[1]: libpod-427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9.scope: Deactivated successfully.
Jan 31 05:53:28 compute-0 podman[75551]: 2026-01-31 05:53:28.923979267 +0000 UTC m=+0.356046180 container died 427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9 (image=quay.io/ceph/ceph:v20, name=hardcore_grothendieck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a15a0f19cd81190c2406618be13d35f68d7dfba00c4ae7960dfbf21336ead8d-merged.mount: Deactivated successfully.
Jan 31 05:53:28 compute-0 podman[75551]: 2026-01-31 05:53:28.955206964 +0000 UTC m=+0.387273837 container remove 427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9 (image=quay.io/ceph/ceph:v20, name=hardcore_grothendieck, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:53:28 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3009694289' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 05:53:28 compute-0 systemd[1]: libpod-conmon-427a4048600bef53e28719ebbe5533655158f6d8f0b21578c9c3c11e82142cd9.scope: Deactivated successfully.
Jan 31 05:53:29 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'crash'
Jan 31 05:53:29 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'dashboard'
Jan 31 05:53:30 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'devicehealth'
Jan 31 05:53:30 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 05:53:30 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 05:53:30 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 05:53:30 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]:   from numpy import show_config as show_numpy_config
Jan 31 05:53:30 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'influx'
Jan 31 05:53:30 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'insights'
Jan 31 05:53:30 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'iostat'
Jan 31 05:53:30 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'k8sevents'
Jan 31 05:53:30 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'localpool'
Jan 31 05:53:30 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 05:53:31 compute-0 podman[75638]: 2026-01-31 05:53:31.019586746 +0000 UTC m=+0.042916157 container create 5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548 (image=quay.io/ceph/ceph:v20, name=goofy_bassi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:31 compute-0 systemd[1]: Started libpod-conmon-5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548.scope.
Jan 31 05:53:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff8e8d30bec92b77db4b2fe08f76d998cb4de1d9d759c6e2bb1be703ffea60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff8e8d30bec92b77db4b2fe08f76d998cb4de1d9d759c6e2bb1be703ffea60b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff8e8d30bec92b77db4b2fe08f76d998cb4de1d9d759c6e2bb1be703ffea60b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:31 compute-0 podman[75638]: 2026-01-31 05:53:30.999375328 +0000 UTC m=+0.022704769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:31 compute-0 podman[75638]: 2026-01-31 05:53:31.097621957 +0000 UTC m=+0.120951378 container init 5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548 (image=quay.io/ceph/ceph:v20, name=goofy_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:53:31 compute-0 podman[75638]: 2026-01-31 05:53:31.10270192 +0000 UTC m=+0.126031321 container start 5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548 (image=quay.io/ceph/ceph:v20, name=goofy_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:53:31 compute-0 podman[75638]: 2026-01-31 05:53:31.108957305 +0000 UTC m=+0.132286776 container attach 5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548 (image=quay.io/ceph/ceph:v20, name=goofy_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:31 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'mirroring'
Jan 31 05:53:31 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'nfs'
Jan 31 05:53:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 05:53:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622405645' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 05:53:31 compute-0 goofy_bassi[75655]: 
Jan 31 05:53:31 compute-0 goofy_bassi[75655]: {
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "health": {
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "status": "HEALTH_OK",
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "checks": {},
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "mutes": []
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     },
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "election_epoch": 5,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "quorum": [
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         0
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     ],
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "quorum_names": [
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "compute-0"
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     ],
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "quorum_age": 4,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "monmap": {
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "epoch": 1,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "min_mon_release_name": "tentacle",
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_mons": 1
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     },
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "osdmap": {
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "epoch": 1,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_osds": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_up_osds": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "osd_up_since": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_in_osds": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "osd_in_since": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_remapped_pgs": 0
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     },
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "pgmap": {
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "pgs_by_state": [],
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_pgs": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_pools": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_objects": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "data_bytes": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "bytes_used": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "bytes_avail": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "bytes_total": 0
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     },
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "fsmap": {
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "epoch": 1,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "btime": "2026-01-31T05:53:24:897973+0000",
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "by_rank": [],
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "up:standby": 0
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     },
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "mgrmap": {
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "available": false,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "num_standbys": 0,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "modules": [
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:             "iostat",
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:             "nfs"
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         ],
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "services": {}
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     },
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "servicemap": {
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "epoch": 1,
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "modified": "2026-01-31T05:53:24.900966+0000",
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:         "services": {}
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     },
Jan 31 05:53:31 compute-0 goofy_bassi[75655]:     "progress_events": {}
Jan 31 05:53:31 compute-0 goofy_bassi[75655]: }
Jan 31 05:53:31 compute-0 systemd[1]: libpod-5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548.scope: Deactivated successfully.
Jan 31 05:53:31 compute-0 podman[75638]: 2026-01-31 05:53:31.301285186 +0000 UTC m=+0.324614577 container died 5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548 (image=quay.io/ceph/ceph:v20, name=goofy_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:53:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ff8e8d30bec92b77db4b2fe08f76d998cb4de1d9d759c6e2bb1be703ffea60b-merged.mount: Deactivated successfully.
Jan 31 05:53:31 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1622405645' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 05:53:31 compute-0 podman[75638]: 2026-01-31 05:53:31.34806415 +0000 UTC m=+0.371393591 container remove 5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548 (image=quay.io/ceph/ceph:v20, name=goofy_bassi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:31 compute-0 systemd[1]: libpod-conmon-5fd3259d49f5cee80b00bdd60f530e235bcb3d41b37aeeb34d076197e7080548.scope: Deactivated successfully.
Jan 31 05:53:31 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'orchestrator'
Jan 31 05:53:31 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 05:53:31 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'osd_support'
Jan 31 05:53:31 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 05:53:31 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'progress'
Jan 31 05:53:31 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'prometheus'
Jan 31 05:53:32 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'rbd_support'
Jan 31 05:53:32 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'rgw'
Jan 31 05:53:32 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'rook'
Jan 31 05:53:33 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'selftest'
Jan 31 05:53:33 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'smb'
Jan 31 05:53:33 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'snap_schedule'
Jan 31 05:53:33 compute-0 podman[75694]: 2026-01-31 05:53:33.419617484 +0000 UTC m=+0.055076648 container create 951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a (image=quay.io/ceph/ceph:v20, name=jovial_williamson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:33 compute-0 systemd[1]: Started libpod-conmon-951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a.scope.
Jan 31 05:53:33 compute-0 podman[75694]: 2026-01-31 05:53:33.38567258 +0000 UTC m=+0.021131784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:33 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'stats'
Jan 31 05:53:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aa0db2f53a4e9f573a09d8825d63b17a2aa5da87531036410e1b2e4c9738ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aa0db2f53a4e9f573a09d8825d63b17a2aa5da87531036410e1b2e4c9738ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aa0db2f53a4e9f573a09d8825d63b17a2aa5da87531036410e1b2e4c9738ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:33 compute-0 podman[75694]: 2026-01-31 05:53:33.501270437 +0000 UTC m=+0.136729601 container init 951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a (image=quay.io/ceph/ceph:v20, name=jovial_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:33 compute-0 podman[75694]: 2026-01-31 05:53:33.504868838 +0000 UTC m=+0.140328002 container start 951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a (image=quay.io/ceph/ceph:v20, name=jovial_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:33 compute-0 podman[75694]: 2026-01-31 05:53:33.508739546 +0000 UTC m=+0.144198700 container attach 951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a (image=quay.io/ceph/ceph:v20, name=jovial_williamson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 05:53:33 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'status'
Jan 31 05:53:33 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'telegraf'
Jan 31 05:53:33 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'telemetry'
Jan 31 05:53:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 05:53:33 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519171172' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 05:53:33 compute-0 jovial_williamson[75711]: 
Jan 31 05:53:33 compute-0 jovial_williamson[75711]: {
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "health": {
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "status": "HEALTH_OK",
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "checks": {},
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "mutes": []
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     },
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "election_epoch": 5,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "quorum": [
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         0
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     ],
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "quorum_names": [
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "compute-0"
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     ],
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "quorum_age": 6,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "monmap": {
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "epoch": 1,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "min_mon_release_name": "tentacle",
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_mons": 1
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     },
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "osdmap": {
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "epoch": 1,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_osds": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_up_osds": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "osd_up_since": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_in_osds": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "osd_in_since": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_remapped_pgs": 0
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     },
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "pgmap": {
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "pgs_by_state": [],
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_pgs": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_pools": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_objects": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "data_bytes": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "bytes_used": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "bytes_avail": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "bytes_total": 0
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     },
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "fsmap": {
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "epoch": 1,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "btime": "2026-01-31T05:53:24.897973+0000",
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "by_rank": [],
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "up:standby": 0
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     },
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "mgrmap": {
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "available": false,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "num_standbys": 0,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "modules": [
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:             "iostat",
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:             "nfs"
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         ],
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "services": {}
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     },
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "servicemap": {
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "epoch": 1,
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "modified": "2026-01-31T05:53:24.900966+0000",
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:         "services": {}
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     },
Jan 31 05:53:33 compute-0 jovial_williamson[75711]:     "progress_events": {}
Jan 31 05:53:33 compute-0 jovial_williamson[75711]: }
Jan 31 05:53:33 compute-0 systemd[1]: libpod-951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a.scope: Deactivated successfully.
Jan 31 05:53:33 compute-0 podman[75694]: 2026-01-31 05:53:33.694877534 +0000 UTC m=+0.330336698 container died 951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a (image=quay.io/ceph/ceph:v20, name=jovial_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3aa0db2f53a4e9f573a09d8825d63b17a2aa5da87531036410e1b2e4c9738ae-merged.mount: Deactivated successfully.
Jan 31 05:53:33 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3519171172' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 05:53:33 compute-0 podman[75694]: 2026-01-31 05:53:33.748286924 +0000 UTC m=+0.383746078 container remove 951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a (image=quay.io/ceph/ceph:v20, name=jovial_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:33 compute-0 systemd[1]: libpod-conmon-951ba36e601b8ca516b39e3a376b9cea31efb139d529db59a987826983df0a7a.scope: Deactivated successfully.
Jan 31 05:53:33 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'volumes'
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: ms_deliver_dispatch: unhandled message 0x55b321569a00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vavqfa
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr handle_mgr_map Activating!
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr handle_mgr_map I am now activating
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.vavqfa(active, starting, since 0.0150139s)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vavqfa", "id": "compute-0.vavqfa"} v 0)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mgr metadata", "who": "compute-0.vavqfa", "id": "compute-0.vavqfa"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: balancer
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: crash
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Manager daemon compute-0.vavqfa is now available
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [balancer INFO root] Starting
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: devicehealth
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_05:53:34
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [balancer INFO root] No pools available
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [devicehealth INFO root] Starting
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: iostat
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: nfs
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: orchestrator
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: pg_autoscaler
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: progress
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [progress INFO root] Loading...
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [progress INFO root] No stored events to load
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [progress INFO root] Loaded [] historic events
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] recovery thread starting
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] starting setup
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: rbd_support
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: status
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: telemetry
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/mirror_snapshot_schedule"} v 0)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/mirror_snapshot_schedule"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] PerfHandler: starting
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TaskHandler: starting
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/trash_purge_schedule"} v 0)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/trash_purge_schedule"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: [rbd_support INFO root] setup complete
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:34 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: volumes
Jan 31 05:53:34 compute-0 ceph-mon[75251]: Activating manager daemon compute-0.vavqfa
Jan 31 05:53:34 compute-0 ceph-mon[75251]: mgrmap e2: compute-0.vavqfa(active, starting, since 0.0150139s)
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mgr metadata", "who": "compute-0.vavqfa", "id": "compute-0.vavqfa"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: Manager daemon compute-0.vavqfa is now available
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/mirror_snapshot_schedule"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/trash_purge_schedule"} : dispatch
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:34 compute-0 ceph-mon[75251]: from='mgr.14102 192.168.122.100:0/3166511016' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:35 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.vavqfa(active, since 1.02984s)
Jan 31 05:53:35 compute-0 podman[75827]: 2026-01-31 05:53:35.825165737 +0000 UTC m=+0.056172348 container create 7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226 (image=quay.io/ceph/ceph:v20, name=upbeat_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:53:35 compute-0 systemd[1]: Started libpod-conmon-7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226.scope.
Jan 31 05:53:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:35 compute-0 podman[75827]: 2026-01-31 05:53:35.800214557 +0000 UTC m=+0.031221208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28e254effde1a7932ec0a03517586d3516f9422bd0f8e5272a1e590b0d509a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28e254effde1a7932ec0a03517586d3516f9422bd0f8e5272a1e590b0d509a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28e254effde1a7932ec0a03517586d3516f9422bd0f8e5272a1e590b0d509a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:35 compute-0 podman[75827]: 2026-01-31 05:53:35.922597773 +0000 UTC m=+0.153604344 container init 7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226 (image=quay.io/ceph/ceph:v20, name=upbeat_cannon, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:53:35 compute-0 podman[75827]: 2026-01-31 05:53:35.928714595 +0000 UTC m=+0.159721166 container start 7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226 (image=quay.io/ceph/ceph:v20, name=upbeat_cannon, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:53:35 compute-0 podman[75827]: 2026-01-31 05:53:35.934415585 +0000 UTC m=+0.165422156 container attach 7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226 (image=quay.io/ceph/ceph:v20, name=upbeat_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:53:36 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:36 compute-0 ceph-mon[75251]: mgrmap e3: compute-0.vavqfa(active, since 1.02984s)
Jan 31 05:53:36 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:36 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.vavqfa(active, since 2s)
Jan 31 05:53:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 05:53:36 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291095654' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]: 
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]: {
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "health": {
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "status": "HEALTH_OK",
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "checks": {},
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "mutes": []
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     },
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "election_epoch": 5,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "quorum": [
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         0
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     ],
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "quorum_names": [
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "compute-0"
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     ],
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "quorum_age": 9,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "monmap": {
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "epoch": 1,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "min_mon_release_name": "tentacle",
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_mons": 1
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     },
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "osdmap": {
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "epoch": 1,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_osds": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_up_osds": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "osd_up_since": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_in_osds": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "osd_in_since": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_remapped_pgs": 0
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     },
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "pgmap": {
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "pgs_by_state": [],
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_pgs": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_pools": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_objects": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "data_bytes": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "bytes_used": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "bytes_avail": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "bytes_total": 0
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     },
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "fsmap": {
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "epoch": 1,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "btime": "2026-01-31T05:53:24.897973+0000",
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "by_rank": [],
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "up:standby": 0
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     },
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "mgrmap": {
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "available": true,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "num_standbys": 0,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "modules": [
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:             "iostat",
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:             "nfs"
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         ],
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "services": {}
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     },
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "servicemap": {
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "epoch": 1,
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "modified": "2026-01-31T05:53:24.900966+0000",
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:         "services": {}
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     },
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]:     "progress_events": {}
Jan 31 05:53:36 compute-0 upbeat_cannon[75843]: }
Jan 31 05:53:36 compute-0 systemd[1]: libpod-7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226.scope: Deactivated successfully.
Jan 31 05:53:36 compute-0 podman[75827]: 2026-01-31 05:53:36.496927212 +0000 UTC m=+0.727933813 container died 7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226 (image=quay.io/ceph/ceph:v20, name=upbeat_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c28e254effde1a7932ec0a03517586d3516f9422bd0f8e5272a1e590b0d509a8-merged.mount: Deactivated successfully.
Jan 31 05:53:36 compute-0 podman[75827]: 2026-01-31 05:53:36.567524875 +0000 UTC m=+0.798531446 container remove 7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226 (image=quay.io/ceph/ceph:v20, name=upbeat_cannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:53:36 compute-0 systemd[1]: libpod-conmon-7d1d374280828532e10a6738a7ec694b1d3d595dc3b201bc82607b6d87d8a226.scope: Deactivated successfully.
Jan 31 05:53:36 compute-0 podman[75882]: 2026-01-31 05:53:36.651950435 +0000 UTC m=+0.062877966 container create f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380 (image=quay.io/ceph/ceph:v20, name=great_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:36 compute-0 systemd[1]: Started libpod-conmon-f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380.scope.
Jan 31 05:53:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cc669e9bec4393dc6a35ba91ec71304654f47cd3bac4d3ed165e21a678ef97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cc669e9bec4393dc6a35ba91ec71304654f47cd3bac4d3ed165e21a678ef97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cc669e9bec4393dc6a35ba91ec71304654f47cd3bac4d3ed165e21a678ef97/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4cc669e9bec4393dc6a35ba91ec71304654f47cd3bac4d3ed165e21a678ef97/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:36 compute-0 podman[75882]: 2026-01-31 05:53:36.622512799 +0000 UTC m=+0.033440400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:36 compute-0 podman[75882]: 2026-01-31 05:53:36.753674291 +0000 UTC m=+0.164601852 container init f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380 (image=quay.io/ceph/ceph:v20, name=great_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:36 compute-0 podman[75882]: 2026-01-31 05:53:36.757804547 +0000 UTC m=+0.168732078 container start f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380 (image=quay.io/ceph/ceph:v20, name=great_napier, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:53:36 compute-0 podman[75882]: 2026-01-31 05:53:36.762489459 +0000 UTC m=+0.173416980 container attach f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380 (image=quay.io/ceph/ceph:v20, name=great_napier, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 05:53:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 05:53:37 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3779166200' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 05:53:37 compute-0 great_napier[75898]: 
Jan 31 05:53:37 compute-0 great_napier[75898]: [global]
Jan 31 05:53:37 compute-0 great_napier[75898]:         fsid = 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:37 compute-0 great_napier[75898]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 05:53:37 compute-0 great_napier[75898]:         osd_crush_chooseleaf_type = 0
Jan 31 05:53:37 compute-0 systemd[1]: libpod-f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380.scope: Deactivated successfully.
Jan 31 05:53:37 compute-0 podman[75882]: 2026-01-31 05:53:37.16954898 +0000 UTC m=+0.580476491 container died f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380 (image=quay.io/ceph/ceph:v20, name=great_napier, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4cc669e9bec4393dc6a35ba91ec71304654f47cd3bac4d3ed165e21a678ef97-merged.mount: Deactivated successfully.
Jan 31 05:53:37 compute-0 podman[75882]: 2026-01-31 05:53:37.211303203 +0000 UTC m=+0.622230734 container remove f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380 (image=quay.io/ceph/ceph:v20, name=great_napier, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:37 compute-0 systemd[1]: libpod-conmon-f2a3c8ef7f66394fc636ca13a6a123df580306149ea10cd7b7c7b47e191b1380.scope: Deactivated successfully.
Jan 31 05:53:37 compute-0 podman[75937]: 2026-01-31 05:53:37.260524265 +0000 UTC m=+0.037791192 container create 86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a (image=quay.io/ceph/ceph:v20, name=clever_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:53:37 compute-0 systemd[1]: Started libpod-conmon-86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a.scope.
Jan 31 05:53:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04737f0cbbb4f6a7a8b70a1c0961b031f0b8545b1523259e27c4187efb003564/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04737f0cbbb4f6a7a8b70a1c0961b031f0b8545b1523259e27c4187efb003564/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04737f0cbbb4f6a7a8b70a1c0961b031f0b8545b1523259e27c4187efb003564/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:37 compute-0 podman[75937]: 2026-01-31 05:53:37.325720866 +0000 UTC m=+0.102987883 container init 86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a (image=quay.io/ceph/ceph:v20, name=clever_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:53:37 compute-0 podman[75937]: 2026-01-31 05:53:37.333214176 +0000 UTC m=+0.110481143 container start 86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a (image=quay.io/ceph/ceph:v20, name=clever_tesla, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:37 compute-0 podman[75937]: 2026-01-31 05:53:37.238534727 +0000 UTC m=+0.015801674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:37 compute-0 ceph-mon[75251]: mgrmap e4: compute-0.vavqfa(active, since 2s)
Jan 31 05:53:37 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4291095654' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 05:53:37 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3779166200' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 05:53:37 compute-0 podman[75937]: 2026-01-31 05:53:37.340449199 +0000 UTC m=+0.117716216 container attach 86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a (image=quay.io/ceph/ceph:v20, name=clever_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 31 05:53:37 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3051668936' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:38 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3051668936' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 05:53:38 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3051668936' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  1: '-n'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  2: 'mgr.compute-0.vavqfa'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  3: '-f'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  4: '--setuser'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  5: 'ceph'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  6: '--setgroup'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  7: 'ceph'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  8: '--default-log-to-file=false'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  9: '--default-log-to-journald=true'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr respawn  exe_path /proc/self/exe
Jan 31 05:53:38 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.vavqfa(active, since 4s)
Jan 31 05:53:38 compute-0 systemd[1]: libpod-86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a.scope: Deactivated successfully.
Jan 31 05:53:38 compute-0 podman[75937]: 2026-01-31 05:53:38.50704126 +0000 UTC m=+1.284308257 container died 86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a (image=quay.io/ceph/ceph:v20, name=clever_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-04737f0cbbb4f6a7a8b70a1c0961b031f0b8545b1523259e27c4187efb003564-merged.mount: Deactivated successfully.
Jan 31 05:53:38 compute-0 podman[75937]: 2026-01-31 05:53:38.549734219 +0000 UTC m=+1.327001196 container remove 86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a (image=quay.io/ceph/ceph:v20, name=clever_tesla, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:38 compute-0 systemd[1]: libpod-conmon-86d1bda062ca926dbba7fda0c8c43a549ee2db314d4ca9b1972db77323d32d3a.scope: Deactivated successfully.
Jan 31 05:53:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: ignoring --setuser ceph since I am not root
Jan 31 05:53:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: ignoring --setgroup ceph since I am not root
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: pidfile_write: ignore empty --pid-file
Jan 31 05:53:38 compute-0 podman[75993]: 2026-01-31 05:53:38.60495918 +0000 UTC m=+0.038555564 container create 66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b (image=quay.io/ceph/ceph:v20, name=youthful_williams, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'alerts'
Jan 31 05:53:38 compute-0 systemd[1]: Started libpod-conmon-66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b.scope.
Jan 31 05:53:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8228787ca97cec78f0d9b7b0bfa1dcc38c9bb9ffcfc72a54227899cfbaab84b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8228787ca97cec78f0d9b7b0bfa1dcc38c9bb9ffcfc72a54227899cfbaab84b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8228787ca97cec78f0d9b7b0bfa1dcc38c9bb9ffcfc72a54227899cfbaab84b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:38 compute-0 podman[75993]: 2026-01-31 05:53:38.675270205 +0000 UTC m=+0.108866609 container init 66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b (image=quay.io/ceph/ceph:v20, name=youthful_williams, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:38 compute-0 podman[75993]: 2026-01-31 05:53:38.682280711 +0000 UTC m=+0.115877095 container start 66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b (image=quay.io/ceph/ceph:v20, name=youthful_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:53:38 compute-0 podman[75993]: 2026-01-31 05:53:38.685925504 +0000 UTC m=+0.119521938 container attach 66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b (image=quay.io/ceph/ceph:v20, name=youthful_williams, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:53:38 compute-0 podman[75993]: 2026-01-31 05:53:38.591189713 +0000 UTC m=+0.024786127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'balancer'
Jan 31 05:53:38 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'cephadm'
Jan 31 05:53:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 05:53:39 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883890351' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 05:53:39 compute-0 youthful_williams[76029]: {
Jan 31 05:53:39 compute-0 youthful_williams[76029]:     "epoch": 5,
Jan 31 05:53:39 compute-0 youthful_williams[76029]:     "available": true,
Jan 31 05:53:39 compute-0 youthful_williams[76029]:     "active_name": "compute-0.vavqfa",
Jan 31 05:53:39 compute-0 youthful_williams[76029]:     "num_standby": 0
Jan 31 05:53:39 compute-0 youthful_williams[76029]: }
Jan 31 05:53:39 compute-0 systemd[1]: libpod-66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b.scope: Deactivated successfully.
Jan 31 05:53:39 compute-0 podman[75993]: 2026-01-31 05:53:39.169410951 +0000 UTC m=+0.603007345 container died 66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b (image=quay.io/ceph/ceph:v20, name=youthful_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8228787ca97cec78f0d9b7b0bfa1dcc38c9bb9ffcfc72a54227899cfbaab84b6-merged.mount: Deactivated successfully.
Jan 31 05:53:39 compute-0 podman[75993]: 2026-01-31 05:53:39.211162264 +0000 UTC m=+0.644758648 container remove 66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b (image=quay.io/ceph/ceph:v20, name=youthful_williams, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 05:53:39 compute-0 systemd[1]: libpod-conmon-66b03b03bfb956bbad2cb7e1dfd2d43e4c377d3485851c72a32c80afcb81530b.scope: Deactivated successfully.
Jan 31 05:53:39 compute-0 podman[76078]: 2026-01-31 05:53:39.28688382 +0000 UTC m=+0.053920985 container create 8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd (image=quay.io/ceph/ceph:v20, name=compassionate_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:39 compute-0 systemd[1]: Started libpod-conmon-8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd.scope.
Jan 31 05:53:39 compute-0 podman[76078]: 2026-01-31 05:53:39.259751628 +0000 UTC m=+0.026788883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83be804ff96a4e0ab818d5dd47833f1c00fc0a0bac7d10230c4a64af14331585/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83be804ff96a4e0ab818d5dd47833f1c00fc0a0bac7d10230c4a64af14331585/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83be804ff96a4e0ab818d5dd47833f1c00fc0a0bac7d10230c4a64af14331585/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:39 compute-0 podman[76078]: 2026-01-31 05:53:39.393570986 +0000 UTC m=+0.160608231 container init 8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd (image=quay.io/ceph/ceph:v20, name=compassionate_meitner, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:39 compute-0 podman[76078]: 2026-01-31 05:53:39.401295593 +0000 UTC m=+0.168332798 container start 8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd (image=quay.io/ceph/ceph:v20, name=compassionate_meitner, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:39 compute-0 podman[76078]: 2026-01-31 05:53:39.406701895 +0000 UTC m=+0.173739070 container attach 8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd (image=quay.io/ceph/ceph:v20, name=compassionate_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:53:39 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3051668936' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 05:53:39 compute-0 ceph-mon[75251]: mgrmap e5: compute-0.vavqfa(active, since 4s)
Jan 31 05:53:39 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/883890351' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 05:53:39 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'crash'
Jan 31 05:53:39 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'dashboard'
Jan 31 05:53:40 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'devicehealth'
Jan 31 05:53:40 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 05:53:40 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 05:53:40 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 05:53:40 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]:   from numpy import show_config as show_numpy_config
Jan 31 05:53:40 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'influx'
Jan 31 05:53:40 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'insights'
Jan 31 05:53:40 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'iostat'
Jan 31 05:53:40 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'k8sevents'
Jan 31 05:53:40 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'localpool'
Jan 31 05:53:41 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 05:53:41 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'mirroring'
Jan 31 05:53:41 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'nfs'
Jan 31 05:53:41 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'orchestrator'
Jan 31 05:53:41 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 05:53:41 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'osd_support'
Jan 31 05:53:41 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 05:53:41 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'progress'
Jan 31 05:53:42 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'prometheus'
Jan 31 05:53:42 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'rbd_support'
Jan 31 05:53:42 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'rgw'
Jan 31 05:53:42 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'rook'
Jan 31 05:53:43 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'selftest'
Jan 31 05:53:43 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'smb'
Jan 31 05:53:43 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'snap_schedule'
Jan 31 05:53:43 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'stats'
Jan 31 05:53:43 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'status'
Jan 31 05:53:43 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'telegraf'
Jan 31 05:53:43 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'telemetry'
Jan 31 05:53:43 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: mgr[py] Loading python module 'volumes'
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Active manager daemon compute-0.vavqfa restarted
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vavqfa
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: ms_deliver_dispatch: unhandled message 0x55a049a34000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: mgr handle_mgr_map Activating!
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: mgr handle_mgr_map I am now activating
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.vavqfa(active, starting, since 0.0171892s)
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vavqfa", "id": "compute-0.vavqfa"} v 0)
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mgr metadata", "who": "compute-0.vavqfa", "id": "compute-0.vavqfa"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: balancer
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Starting
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_05:53:44
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 05:53:44 compute-0 ceph-mgr[75550]: [balancer INFO root] No pools available
Jan 31 05:53:44 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Manager daemon compute-0.vavqfa is now available
Jan 31 05:53:44 compute-0 ceph-mon[75251]: Active manager daemon compute-0.vavqfa restarted
Jan 31 05:53:44 compute-0 ceph-mon[75251]: Activating manager daemon compute-0.vavqfa
Jan 31 05:53:44 compute-0 ceph-mon[75251]: osdmap e2: 0 total, 0 up, 0 in
Jan 31 05:53:44 compute-0 ceph-mon[75251]: mgrmap e6: compute-0.vavqfa(active, starting, since 0.0171892s)
Jan 31 05:53:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mgr metadata", "who": "compute-0.vavqfa", "id": "compute-0.vavqfa"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 05:53:44 compute-0 ceph-mon[75251]: Manager daemon compute-0.vavqfa is now available
Jan 31 05:53:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 31 05:53:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: cephadm
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: crash
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: devicehealth
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [devicehealth INFO root] Starting
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: iostat
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: nfs
Jan 31 05:53:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: orchestrator
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: pg_autoscaler
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: progress
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:53:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [progress INFO root] Loading...
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [progress INFO root] No stored events to load
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [progress INFO root] Loaded [] historic events
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] recovery thread starting
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] starting setup
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: rbd_support
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: status
Jan 31 05:53:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/mirror_snapshot_schedule"} v 0)
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/mirror_snapshot_schedule"} : dispatch
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: telemetry
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] PerfHandler: starting
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TaskHandler: starting
Jan 31 05:53:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/trash_purge_schedule"} v 0)
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/trash_purge_schedule"} : dispatch
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] setup complete
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: mgr load Constructed class from module: volumes
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 05:53:45 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.vavqfa(active, since 1.0875s)
Jan 31 05:53:45 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 05:53:45 compute-0 compassionate_meitner[76094]: {
Jan 31 05:53:45 compute-0 compassionate_meitner[76094]:     "mgrmap_epoch": 7,
Jan 31 05:53:45 compute-0 compassionate_meitner[76094]:     "initialized": true
Jan 31 05:53:45 compute-0 compassionate_meitner[76094]: }
Jan 31 05:53:45 compute-0 systemd[1]: libpod-8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd.scope: Deactivated successfully.
Jan 31 05:53:45 compute-0 podman[76078]: 2026-01-31 05:53:45.492549049 +0000 UTC m=+6.259586254 container died 8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd (image=quay.io/ceph/ceph:v20, name=compassionate_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-83be804ff96a4e0ab818d5dd47833f1c00fc0a0bac7d10230c4a64af14331585-merged.mount: Deactivated successfully.
Jan 31 05:53:45 compute-0 podman[76078]: 2026-01-31 05:53:45.839480842 +0000 UTC m=+6.606518047 container remove 8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd (image=quay.io/ceph/ceph:v20, name=compassionate_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:45 compute-0 systemd[1]: libpod-conmon-8564c1cc8622b0d082ce90e89db4bc0f455f3432ac2345e757bf9db3727c1acd.scope: Deactivated successfully.
Jan 31 05:53:45 compute-0 podman[76241]: 2026-01-31 05:53:45.96727094 +0000 UTC m=+0.101690686 container create 809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c (image=quay.io/ceph/ceph:v20, name=nervous_almeida, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:53:45 compute-0 podman[76241]: 2026-01-31 05:53:45.898373146 +0000 UTC m=+0.032792932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:46 compute-0 systemd[1]: Started libpod-conmon-809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c.scope.
Jan 31 05:53:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7d40ec5739b0c0887c2db67010d46971dd8180206d74edc7287231071f991e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7d40ec5739b0c0887c2db67010d46971dd8180206d74edc7287231071f991e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe7d40ec5739b0c0887c2db67010d46971dd8180206d74edc7287231071f991e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:46 compute-0 podman[76241]: 2026-01-31 05:53:46.076456197 +0000 UTC m=+0.210875933 container init 809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c (image=quay.io/ceph/ceph:v20, name=nervous_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:46 compute-0 podman[76241]: 2026-01-31 05:53:46.082841496 +0000 UTC m=+0.217261222 container start 809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c (image=quay.io/ceph/ceph:v20, name=nervous_almeida, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:46 compute-0 podman[76241]: 2026-01-31 05:53:46.085866751 +0000 UTC m=+0.220286447 container attach 809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c (image=quay.io/ceph/ceph:v20, name=nervous_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:46 compute-0 ceph-mon[75251]: Found migration_current of "None". Setting to last migration.
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/mirror_snapshot_schedule"} : dispatch
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vavqfa/trash_purge_schedule"} : dispatch
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 05:53:46 compute-0 ceph-mon[75251]: mgrmap e7: compute-0.vavqfa(active, since 1.0875s)
Jan 31 05:53:46 compute-0 ceph-mon[75251]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: [cephadm INFO cherrypy.error] [31/Jan/2026:05:53:46] ENGINE Bus STARTING
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : [31/Jan/2026:05:53:46] ENGINE Bus STARTING
Jan 31 05:53:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 31 05:53:46 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/148627781' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: [cephadm INFO cherrypy.error] [31/Jan/2026:05:53:46] ENGINE Serving on https://192.168.122.100:7150
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : [31/Jan/2026:05:53:46] ENGINE Serving on https://192.168.122.100:7150
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: [cephadm INFO cherrypy.error] [31/Jan/2026:05:53:46] ENGINE Client ('192.168.122.100', 59806) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : [31/Jan/2026:05:53:46] ENGINE Client ('192.168.122.100', 59806) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: [cephadm INFO cherrypy.error] [31/Jan/2026:05:53:46] ENGINE Serving on http://192.168.122.100:8765
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : [31/Jan/2026:05:53:46] ENGINE Serving on http://192.168.122.100:8765
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: [cephadm INFO cherrypy.error] [31/Jan/2026:05:53:46] ENGINE Bus STARTED
Jan 31 05:53:46 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : [31/Jan/2026:05:53:46] ENGINE Bus STARTED
Jan 31 05:53:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 05:53:46 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019901458 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:53:47 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:47 compute-0 ceph-mon[75251]: [31/Jan/2026:05:53:46] ENGINE Bus STARTING
Jan 31 05:53:47 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/148627781' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 05:53:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:48 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/148627781' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 05:53:48 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vavqfa(active, since 3s)
Jan 31 05:53:48 compute-0 nervous_almeida[76257]: module 'orchestrator' is already enabled (always-on)
Jan 31 05:53:48 compute-0 systemd[1]: libpod-809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c.scope: Deactivated successfully.
Jan 31 05:53:48 compute-0 podman[76241]: 2026-01-31 05:53:48.266800646 +0000 UTC m=+2.401220382 container died 809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c (image=quay.io/ceph/ceph:v20, name=nervous_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:48 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe7d40ec5739b0c0887c2db67010d46971dd8180206d74edc7287231071f991e-merged.mount: Deactivated successfully.
Jan 31 05:53:48 compute-0 podman[76241]: 2026-01-31 05:53:48.821617436 +0000 UTC m=+2.956037162 container remove 809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c (image=quay.io/ceph/ceph:v20, name=nervous_almeida, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:53:48 compute-0 systemd[1]: libpod-conmon-809c8643c90aca95ad1888b3a150592cb304e4baea8e55a8e964418c1064a62c.scope: Deactivated successfully.
Jan 31 05:53:48 compute-0 podman[76319]: 2026-01-31 05:53:48.896347494 +0000 UTC m=+0.053155143 container create a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142 (image=quay.io/ceph/ceph:v20, name=practical_carson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:48 compute-0 systemd[1]: Started libpod-conmon-a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142.scope.
Jan 31 05:53:48 compute-0 ceph-mon[75251]: [31/Jan/2026:05:53:46] ENGINE Serving on https://192.168.122.100:7150
Jan 31 05:53:48 compute-0 ceph-mon[75251]: [31/Jan/2026:05:53:46] ENGINE Client ('192.168.122.100', 59806) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 05:53:48 compute-0 ceph-mon[75251]: [31/Jan/2026:05:53:46] ENGINE Serving on http://192.168.122.100:8765
Jan 31 05:53:48 compute-0 ceph-mon[75251]: [31/Jan/2026:05:53:46] ENGINE Bus STARTED
Jan 31 05:53:48 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/148627781' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 05:53:48 compute-0 ceph-mon[75251]: mgrmap e8: compute-0.vavqfa(active, since 3s)
Jan 31 05:53:48 compute-0 podman[76319]: 2026-01-31 05:53:48.876359323 +0000 UTC m=+0.033166972 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cec93a00a652e124a259559258475ca578972981d315c9527baa9fa149a92a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cec93a00a652e124a259559258475ca578972981d315c9527baa9fa149a92a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cec93a00a652e124a259559258475ca578972981d315c9527baa9fa149a92a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:49 compute-0 podman[76319]: 2026-01-31 05:53:49.011291532 +0000 UTC m=+0.168099241 container init a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142 (image=quay.io/ceph/ceph:v20, name=practical_carson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:49 compute-0 podman[76319]: 2026-01-31 05:53:49.017706312 +0000 UTC m=+0.174513961 container start a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142 (image=quay.io/ceph/ceph:v20, name=practical_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:53:49 compute-0 podman[76319]: 2026-01-31 05:53:49.024079491 +0000 UTC m=+0.180887150 container attach a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142 (image=quay.io/ceph/ceph:v20, name=practical_carson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:53:49 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:49 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 31 05:53:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 05:53:49 compute-0 systemd[1]: libpod-a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142.scope: Deactivated successfully.
Jan 31 05:53:49 compute-0 podman[76319]: 2026-01-31 05:53:49.459821677 +0000 UTC m=+0.616629376 container died a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142 (image=quay.io/ceph/ceph:v20, name=practical_carson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:53:49 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cec93a00a652e124a259559258475ca578972981d315c9527baa9fa149a92a2-merged.mount: Deactivated successfully.
Jan 31 05:53:49 compute-0 podman[76319]: 2026-01-31 05:53:49.504311287 +0000 UTC m=+0.661118936 container remove a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142 (image=quay.io/ceph/ceph:v20, name=practical_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:49 compute-0 systemd[1]: libpod-conmon-a8a6e539d13c1d11a759e2a240415ba234a978e5dfc2a53981813946068c8142.scope: Deactivated successfully.
Jan 31 05:53:49 compute-0 podman[76374]: 2026-01-31 05:53:49.585380243 +0000 UTC m=+0.061548839 container create 9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6 (image=quay.io/ceph/ceph:v20, name=pedantic_jennings, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:53:49 compute-0 systemd[1]: Started libpod-conmon-9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6.scope.
Jan 31 05:53:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0995313943b8bf4f0b298da2d94ef50f4eedbdb8d7bde88d8e7d5f15a0d9ac47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0995313943b8bf4f0b298da2d94ef50f4eedbdb8d7bde88d8e7d5f15a0d9ac47/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0995313943b8bf4f0b298da2d94ef50f4eedbdb8d7bde88d8e7d5f15a0d9ac47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:49 compute-0 podman[76374]: 2026-01-31 05:53:49.560490574 +0000 UTC m=+0.036659220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:49 compute-0 podman[76374]: 2026-01-31 05:53:49.66932683 +0000 UTC m=+0.145495476 container init 9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6 (image=quay.io/ceph/ceph:v20, name=pedantic_jennings, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:53:49 compute-0 podman[76374]: 2026-01-31 05:53:49.676476111 +0000 UTC m=+0.152644707 container start 9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6 (image=quay.io/ceph/ceph:v20, name=pedantic_jennings, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:49 compute-0 podman[76374]: 2026-01-31 05:53:49.68104955 +0000 UTC m=+0.157218176 container attach 9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6 (image=quay.io/ceph/ceph:v20, name=pedantic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 31 05:53:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: [cephadm INFO root] Set ssh ssh_user
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 31 05:53:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 31 05:53:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: [cephadm INFO root] Set ssh ssh_config
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 31 05:53:50 compute-0 pedantic_jennings[76391]: ssh user set to ceph-admin. sudo will be used
Jan 31 05:53:50 compute-0 systemd[1]: libpod-9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6.scope: Deactivated successfully.
Jan 31 05:53:50 compute-0 conmon[76391]: conmon 9769bbc431fd9612cb78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6.scope/container/memory.events
Jan 31 05:53:50 compute-0 podman[76374]: 2026-01-31 05:53:50.151979255 +0000 UTC m=+0.628147861 container died 9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6 (image=quay.io/ceph/ceph:v20, name=pedantic_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0995313943b8bf4f0b298da2d94ef50f4eedbdb8d7bde88d8e7d5f15a0d9ac47-merged.mount: Deactivated successfully.
Jan 31 05:53:50 compute-0 podman[76374]: 2026-01-31 05:53:50.20024604 +0000 UTC m=+0.676414626 container remove 9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6 (image=quay.io/ceph/ceph:v20, name=pedantic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 31 05:53:50 compute-0 systemd[1]: libpod-conmon-9769bbc431fd9612cb7808f44976e4f237e0339f607f6f21218011cec817a5a6.scope: Deactivated successfully.
Jan 31 05:53:50 compute-0 podman[76429]: 2026-01-31 05:53:50.286336198 +0000 UTC m=+0.065266834 container create 17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030 (image=quay.io/ceph/ceph:v20, name=nice_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:50 compute-0 systemd[1]: Started libpod-conmon-17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030.scope.
Jan 31 05:53:50 compute-0 podman[76429]: 2026-01-31 05:53:50.258690572 +0000 UTC m=+0.037621258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b67808c4943e1d74c6323ab5f514a27c27bf7fdcd338f047c5c6fc9f259884a/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b67808c4943e1d74c6323ab5f514a27c27bf7fdcd338f047c5c6fc9f259884a/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b67808c4943e1d74c6323ab5f514a27c27bf7fdcd338f047c5c6fc9f259884a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b67808c4943e1d74c6323ab5f514a27c27bf7fdcd338f047c5c6fc9f259884a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b67808c4943e1d74c6323ab5f514a27c27bf7fdcd338f047c5c6fc9f259884a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:50 compute-0 podman[76429]: 2026-01-31 05:53:50.38114872 +0000 UTC m=+0.160079406 container init 17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030 (image=quay.io/ceph/ceph:v20, name=nice_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:50 compute-0 podman[76429]: 2026-01-31 05:53:50.397088708 +0000 UTC m=+0.176019364 container start 17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030 (image=quay.io/ceph/ceph:v20, name=nice_cerf, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:53:50 compute-0 podman[76429]: 2026-01-31 05:53:50.403291102 +0000 UTC m=+0.182221718 container attach 17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030 (image=quay.io/ceph/ceph:v20, name=nice_cerf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:50 compute-0 ceph-mon[75251]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 31 05:53:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: [cephadm INFO root] Set ssh private key
Jan 31 05:53:50 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 31 05:53:50 compute-0 systemd[1]: libpod-17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030.scope: Deactivated successfully.
Jan 31 05:53:50 compute-0 podman[76471]: 2026-01-31 05:53:50.870837742 +0000 UTC m=+0.026099314 container died 17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030 (image=quay.io/ceph/ceph:v20, name=nice_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b67808c4943e1d74c6323ab5f514a27c27bf7fdcd338f047c5c6fc9f259884a-merged.mount: Deactivated successfully.
Jan 31 05:53:50 compute-0 podman[76471]: 2026-01-31 05:53:50.910338232 +0000 UTC m=+0.065599764 container remove 17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030 (image=quay.io/ceph/ceph:v20, name=nice_cerf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:53:50 compute-0 systemd[1]: libpod-conmon-17bd190d95581de37754efd336aa4b10df2add8824ea4ec10123025be73a4030.scope: Deactivated successfully.
Jan 31 05:53:50 compute-0 podman[76486]: 2026-01-31 05:53:50.984552355 +0000 UTC m=+0.051892478 container create 22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3 (image=quay.io/ceph/ceph:v20, name=funny_visvesvaraya, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:51 compute-0 systemd[1]: Started libpod-conmon-22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3.scope.
Jan 31 05:53:51 compute-0 podman[76486]: 2026-01-31 05:53:50.957639329 +0000 UTC m=+0.024979492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b530dd53f5bbb505d4b0e4eb95beb9628f0ab870bd2cd9b9b8d809865282d2b/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b530dd53f5bbb505d4b0e4eb95beb9628f0ab870bd2cd9b9b8d809865282d2b/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b530dd53f5bbb505d4b0e4eb95beb9628f0ab870bd2cd9b9b8d809865282d2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b530dd53f5bbb505d4b0e4eb95beb9628f0ab870bd2cd9b9b8d809865282d2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b530dd53f5bbb505d4b0e4eb95beb9628f0ab870bd2cd9b9b8d809865282d2b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:51 compute-0 podman[76486]: 2026-01-31 05:53:51.079299516 +0000 UTC m=+0.146639679 container init 22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3 (image=quay.io/ceph/ceph:v20, name=funny_visvesvaraya, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:53:51 compute-0 podman[76486]: 2026-01-31 05:53:51.089537451 +0000 UTC m=+0.156877534 container start 22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3 (image=quay.io/ceph/ceph:v20, name=funny_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:53:51 compute-0 podman[76486]: 2026-01-31 05:53:51.093898963 +0000 UTC m=+0.161239146 container attach 22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3 (image=quay.io/ceph/ceph:v20, name=funny_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:53:51 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:51 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 31 05:53:51 compute-0 ceph-mon[75251]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:51 compute-0 ceph-mon[75251]: Set ssh ssh_user
Jan 31 05:53:51 compute-0 ceph-mon[75251]: Set ssh ssh_config
Jan 31 05:53:51 compute-0 ceph-mon[75251]: ssh user set to ceph-admin. sudo will be used
Jan 31 05:53:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:51 compute-0 ceph-mgr[75550]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 31 05:53:51 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 31 05:53:51 compute-0 systemd[1]: libpod-22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3.scope: Deactivated successfully.
Jan 31 05:53:51 compute-0 podman[76486]: 2026-01-31 05:53:51.696089797 +0000 UTC m=+0.763429920 container died 22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3 (image=quay.io/ceph/ceph:v20, name=funny_visvesvaraya, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b530dd53f5bbb505d4b0e4eb95beb9628f0ab870bd2cd9b9b8d809865282d2b-merged.mount: Deactivated successfully.
Jan 31 05:53:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052579 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:53:51 compute-0 podman[76486]: 2026-01-31 05:53:51.979193684 +0000 UTC m=+1.046533807 container remove 22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3 (image=quay.io/ceph/ceph:v20, name=funny_visvesvaraya, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:53:52 compute-0 podman[76540]: 2026-01-31 05:53:52.027251936 +0000 UTC m=+0.027937609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:52 compute-0 podman[76540]: 2026-01-31 05:53:52.170059523 +0000 UTC m=+0.170745196 container create 6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36 (image=quay.io/ceph/ceph:v20, name=great_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:53:52 compute-0 systemd[1]: Started libpod-conmon-6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36.scope.
Jan 31 05:53:52 compute-0 systemd[1]: libpod-conmon-22dd73399eed7887397d6c82c82cc8dafb8373d0d627367ec4f600e9366b95e3.scope: Deactivated successfully.
Jan 31 05:53:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9498a8c2c9448f2ad3785009e5ebb7fdaf40dfa8667a571d8a588341133a3311/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9498a8c2c9448f2ad3785009e5ebb7fdaf40dfa8667a571d8a588341133a3311/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9498a8c2c9448f2ad3785009e5ebb7fdaf40dfa8667a571d8a588341133a3311/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:52 compute-0 podman[76540]: 2026-01-31 05:53:52.382069162 +0000 UTC m=+0.382754845 container init 6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36 (image=quay.io/ceph/ceph:v20, name=great_spence, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:52 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:52 compute-0 podman[76540]: 2026-01-31 05:53:52.388210817 +0000 UTC m=+0.388896490 container start 6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36 (image=quay.io/ceph/ceph:v20, name=great_spence, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 05:53:52 compute-0 podman[76540]: 2026-01-31 05:53:52.561592655 +0000 UTC m=+0.562278388 container attach 6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36 (image=quay.io/ceph/ceph:v20, name=great_spence, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:53:52 compute-0 ceph-mon[75251]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:52 compute-0 ceph-mon[75251]: Set ssh ssh_identity_key
Jan 31 05:53:52 compute-0 ceph-mon[75251]: Set ssh private key
Jan 31 05:53:52 compute-0 ceph-mon[75251]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:52 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:52 compute-0 great_spence[76556]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUTLQfZoOaK9/z2Sy2J0axg9ixzsnhW9KNE+Cq4EDOPKWFJpMm55eVeW4GuhlABLM4/UV3d9zPJzQE0SMAK9VCurSeGpqC37Yjw5KoZrJIc6sHMfV82SYdzUaP4mgReUwcS7r7RtBUQYIqtaIjla99MEn9hp/FZGnhEskrghb1L/gKOK8LBoNQpq5PskVRAmo4UPrS9jkkAG/FjpJXV4Dwd9JM2UPgCQ5aHAu96o+pscK0HNMJJz/LJ7vDYLxOKqrW2pTA3cScWJCWD+RIW8iA1uLypj9i8/Cr07EH++MqfXaykGPauLc+Ixqj2++rgTOdzC+vBhi8elTqu9aERRmlL5F7gTlSxRSuS+3cNqiZX8f9UEnXvqdHKOxFo7u6aRDSObSdK9zzc99++60u+tyF2u1G162oIywnoIMg3GAFwgi49OeSd+I0HgaOaGb0GrUkN87eqfP9Gpb5Ytnn149OHzXZ4VSPV90aHEhnKoJZ8dY6qzXrVBTByjjwlvDvOIs= zuul@controller
Jan 31 05:53:52 compute-0 systemd[1]: libpod-6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36.scope: Deactivated successfully.
Jan 31 05:53:52 compute-0 podman[76540]: 2026-01-31 05:53:52.875847832 +0000 UTC m=+0.876533505 container died 6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36 (image=quay.io/ceph/ceph:v20, name=great_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9498a8c2c9448f2ad3785009e5ebb7fdaf40dfa8667a571d8a588341133a3311-merged.mount: Deactivated successfully.
Jan 31 05:53:53 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:53 compute-0 podman[76540]: 2026-01-31 05:53:53.403996955 +0000 UTC m=+1.404682628 container remove 6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36 (image=quay.io/ceph/ceph:v20, name=great_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:53 compute-0 systemd[1]: libpod-conmon-6b08ed9636ae3cfad88753831f653e353c064a6b69bd2caf2140812be8ad1e36.scope: Deactivated successfully.
Jan 31 05:53:53 compute-0 podman[76594]: 2026-01-31 05:53:53.520478901 +0000 UTC m=+0.093608617 container create 49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc (image=quay.io/ceph/ceph:v20, name=zealous_carver, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:53:53 compute-0 podman[76594]: 2026-01-31 05:53:53.461455956 +0000 UTC m=+0.034585722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:53 compute-0 ceph-mon[75251]: Set ssh ssh_identity_pub
Jan 31 05:53:53 compute-0 systemd[1]: Started libpod-conmon-49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc.scope.
Jan 31 05:53:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc3231d6a7b226d2bc4cf2f1c5ec024da817677092728d926e7f8af6dd14411c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc3231d6a7b226d2bc4cf2f1c5ec024da817677092728d926e7f8af6dd14411c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc3231d6a7b226d2bc4cf2f1c5ec024da817677092728d926e7f8af6dd14411c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:53 compute-0 podman[76594]: 2026-01-31 05:53:53.820946186 +0000 UTC m=+0.394075922 container init 49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc (image=quay.io/ceph/ceph:v20, name=zealous_carver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:53:53 compute-0 podman[76594]: 2026-01-31 05:53:53.828099976 +0000 UTC m=+0.401229652 container start 49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc (image=quay.io/ceph/ceph:v20, name=zealous_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:53:53 compute-0 podman[76594]: 2026-01-31 05:53:53.950185809 +0000 UTC m=+0.523315495 container attach 49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc (image=quay.io/ceph/ceph:v20, name=zealous_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:53:54 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:54 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:54 compute-0 sshd-session[76636]: Accepted publickey for ceph-admin from 192.168.122.100 port 40336 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:54 compute-0 systemd-logind[797]: New session 20 of user ceph-admin.
Jan 31 05:53:54 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 05:53:54 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 05:53:54 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 05:53:54 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 31 05:53:54 compute-0 systemd[76640]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:54 compute-0 systemd[76640]: Queued start job for default target Main User Target.
Jan 31 05:53:54 compute-0 systemd[76640]: Created slice User Application Slice.
Jan 31 05:53:54 compute-0 systemd[76640]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 05:53:54 compute-0 systemd[76640]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 05:53:54 compute-0 systemd[76640]: Reached target Paths.
Jan 31 05:53:54 compute-0 systemd[76640]: Reached target Timers.
Jan 31 05:53:54 compute-0 systemd[76640]: Starting D-Bus User Message Bus Socket...
Jan 31 05:53:54 compute-0 systemd[76640]: Starting Create User's Volatile Files and Directories...
Jan 31 05:53:54 compute-0 systemd[76640]: Finished Create User's Volatile Files and Directories.
Jan 31 05:53:54 compute-0 systemd[76640]: Listening on D-Bus User Message Bus Socket.
Jan 31 05:53:54 compute-0 systemd[76640]: Reached target Sockets.
Jan 31 05:53:54 compute-0 systemd[76640]: Reached target Basic System.
Jan 31 05:53:54 compute-0 systemd[76640]: Reached target Main User Target.
Jan 31 05:53:54 compute-0 systemd[76640]: Startup finished in 110ms.
Jan 31 05:53:54 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 31 05:53:54 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Jan 31 05:53:54 compute-0 sshd-session[76636]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:54 compute-0 sshd-session[76653]: Accepted publickey for ceph-admin from 192.168.122.100 port 40348 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:54 compute-0 systemd-logind[797]: New session 22 of user ceph-admin.
Jan 31 05:53:54 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Jan 31 05:53:54 compute-0 sshd-session[76653]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:54 compute-0 sudo[76660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:53:54 compute-0 sudo[76660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:54 compute-0 sudo[76660]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:54 compute-0 ceph-mon[75251]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:54 compute-0 ceph-mon[75251]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:53:54 compute-0 sshd-session[76685]: Accepted publickey for ceph-admin from 192.168.122.100 port 40356 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:54 compute-0 systemd-logind[797]: New session 23 of user ceph-admin.
Jan 31 05:53:55 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 05:53:55 compute-0 sshd-session[76685]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:55 compute-0 sudo[76689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 05:53:55 compute-0 sudo[76689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:55 compute-0 sudo[76689]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:55 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:55 compute-0 sshd-session[76714]: Accepted publickey for ceph-admin from 192.168.122.100 port 40362 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:55 compute-0 systemd-logind[797]: New session 24 of user ceph-admin.
Jan 31 05:53:55 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 05:53:55 compute-0 sshd-session[76714]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:55 compute-0 sudo[76718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 31 05:53:55 compute-0 sudo[76718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:55 compute-0 sudo[76718]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:55 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 31 05:53:55 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 31 05:53:55 compute-0 sshd-session[76743]: Accepted publickey for ceph-admin from 192.168.122.100 port 40368 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:55 compute-0 systemd-logind[797]: New session 25 of user ceph-admin.
Jan 31 05:53:55 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 05:53:55 compute-0 sshd-session[76743]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:55 compute-0 ceph-mon[75251]: Deploying cephadm binary to compute-0
Jan 31 05:53:55 compute-0 sudo[76747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:55 compute-0 sudo[76747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:55 compute-0 sudo[76747]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:56 compute-0 sshd-session[76772]: Accepted publickey for ceph-admin from 192.168.122.100 port 40370 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:56 compute-0 systemd-logind[797]: New session 26 of user ceph-admin.
Jan 31 05:53:56 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 05:53:56 compute-0 sshd-session[76772]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:56 compute-0 sudo[76776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:56 compute-0 sudo[76776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:56 compute-0 sudo[76776]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:56 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:56 compute-0 sshd-session[76801]: Accepted publickey for ceph-admin from 192.168.122.100 port 40380 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:56 compute-0 systemd-logind[797]: New session 27 of user ceph-admin.
Jan 31 05:53:56 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 05:53:56 compute-0 sshd-session[76801]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:56 compute-0 sudo[76805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 31 05:53:56 compute-0 sudo[76805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:56 compute-0 sudo[76805]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:56 compute-0 sshd-session[76830]: Accepted publickey for ceph-admin from 192.168.122.100 port 40384 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:56 compute-0 systemd-logind[797]: New session 28 of user ceph-admin.
Jan 31 05:53:56 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 05:53:56 compute-0 sshd-session[76830]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054701 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:53:56 compute-0 sudo[76834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:53:56 compute-0 sudo[76834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:56 compute-0 sudo[76834]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:57 compute-0 sshd-session[76859]: Accepted publickey for ceph-admin from 192.168.122.100 port 40396 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:57 compute-0 systemd-logind[797]: New session 29 of user ceph-admin.
Jan 31 05:53:57 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 05:53:57 compute-0 sshd-session[76859]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:57 compute-0 sudo[76863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 31 05:53:57 compute-0 sudo[76863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:57 compute-0 sudo[76863]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:57 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:57 compute-0 sshd-session[76888]: Accepted publickey for ceph-admin from 192.168.122.100 port 40408 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:57 compute-0 systemd-logind[797]: New session 30 of user ceph-admin.
Jan 31 05:53:57 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 05:53:57 compute-0 sshd-session[76888]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:58 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:53:58 compute-0 sshd-session[76915]: Accepted publickey for ceph-admin from 192.168.122.100 port 50182 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:58 compute-0 systemd-logind[797]: New session 31 of user ceph-admin.
Jan 31 05:53:58 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 05:53:58 compute-0 sshd-session[76915]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:58 compute-0 sudo[76919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 31 05:53:58 compute-0 sudo[76919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:58 compute-0 sudo[76919]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:59 compute-0 sshd-session[76944]: Accepted publickey for ceph-admin from 192.168.122.100 port 50186 ssh2: RSA SHA256:LJSQhKTtiCJgpo69+XW01N/YgjGM2SGoA+1nnDIiXHU
Jan 31 05:53:59 compute-0 systemd-logind[797]: New session 32 of user ceph-admin.
Jan 31 05:53:59 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 05:53:59 compute-0 sshd-session[76944]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 05:53:59 compute-0 sudo[76948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 05:53:59 compute-0 sudo[76948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:59 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:53:59 compute-0 sudo[76948]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 05:53:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:53:59 compute-0 ceph-mgr[75550]: [cephadm INFO root] Added host compute-0
Jan 31 05:53:59 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 05:53:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 05:53:59 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:53:59 compute-0 zealous_carver[76610]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 05:53:59 compute-0 systemd[1]: libpod-49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc.scope: Deactivated successfully.
Jan 31 05:53:59 compute-0 podman[76594]: 2026-01-31 05:53:59.574301842 +0000 UTC m=+6.147431548 container died 49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc (image=quay.io/ceph/ceph:v20, name=zealous_carver, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:53:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc3231d6a7b226d2bc4cf2f1c5ec024da817677092728d926e7f8af6dd14411c-merged.mount: Deactivated successfully.
Jan 31 05:53:59 compute-0 sudo[76993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:53:59 compute-0 sudo[76993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:59 compute-0 sudo[76993]: pam_unix(sudo:session): session closed for user root
Jan 31 05:53:59 compute-0 podman[76594]: 2026-01-31 05:53:59.622320343 +0000 UTC m=+6.195450029 container remove 49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc (image=quay.io/ceph/ceph:v20, name=zealous_carver, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:53:59 compute-0 systemd[1]: libpod-conmon-49f9257fb9199d05456ebacc79c4401adc1d44a66e142c87fe9ac44c0b0d7dcc.scope: Deactivated successfully.
Jan 31 05:53:59 compute-0 sudo[77033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Jan 31 05:53:59 compute-0 sudo[77033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:53:59 compute-0 podman[77036]: 2026-01-31 05:53:59.685819725 +0000 UTC m=+0.044261100 container create 4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0 (image=quay.io/ceph/ceph:v20, name=eloquent_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 05:53:59 compute-0 systemd[1]: Started libpod-conmon-4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0.scope.
Jan 31 05:53:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:53:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f004f3d7b8037593f9c2f6a11daabdc095493ff7108894fd4a216b0fde89195f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f004f3d7b8037593f9c2f6a11daabdc095493ff7108894fd4a216b0fde89195f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f004f3d7b8037593f9c2f6a11daabdc095493ff7108894fd4a216b0fde89195f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:53:59 compute-0 podman[77036]: 2026-01-31 05:53:59.671305627 +0000 UTC m=+0.029747022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:53:59 compute-0 podman[77036]: 2026-01-31 05:53:59.772022442 +0000 UTC m=+0.130463847 container init 4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0 (image=quay.io/ceph/ceph:v20, name=eloquent_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:53:59 compute-0 podman[77036]: 2026-01-31 05:53:59.777918188 +0000 UTC m=+0.136359563 container start 4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0 (image=quay.io/ceph/ceph:v20, name=eloquent_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:53:59 compute-0 podman[77036]: 2026-01-31 05:53:59.78313367 +0000 UTC m=+0.141575095 container attach 4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0 (image=quay.io/ceph/ceph:v20, name=eloquent_meninsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:00 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:00 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 31 05:54:00 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 31 05:54:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 05:54:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:00 compute-0 eloquent_meninsky[77074]: Scheduled mon update...
Jan 31 05:54:00 compute-0 systemd[1]: libpod-4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0.scope: Deactivated successfully.
Jan 31 05:54:00 compute-0 podman[77036]: 2026-01-31 05:54:00.254756385 +0000 UTC m=+0.613197800 container died 4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0 (image=quay.io/ceph/ceph:v20, name=eloquent_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 31 05:54:00 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:54:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f004f3d7b8037593f9c2f6a11daabdc095493ff7108894fd4a216b0fde89195f-merged.mount: Deactivated successfully.
Jan 31 05:54:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:00 compute-0 ceph-mon[75251]: Added host compute-0
Jan 31 05:54:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:54:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:01 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:01 compute-0 podman[77036]: 2026-01-31 05:54:01.392057555 +0000 UTC m=+1.750498970 container remove 4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0 (image=quay.io/ceph/ceph:v20, name=eloquent_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:54:01 compute-0 podman[77092]: 2026-01-31 05:54:01.395379282 +0000 UTC m=+1.514508472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:01 compute-0 systemd[1]: libpod-conmon-4f66a5d444fcbb4f0ab1092ee80eda9b651d53aed0c7125ef454ec09ed57f1c0.scope: Deactivated successfully.
Jan 31 05:54:01 compute-0 podman[77138]: 2026-01-31 05:54:01.441679672 +0000 UTC m=+0.030059233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:01 compute-0 podman[77138]: 2026-01-31 05:54:01.702812859 +0000 UTC m=+0.291192370 container create 51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49 (image=quay.io/ceph/ceph:v20, name=relaxed_montalcini, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 31 05:54:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:02 compute-0 ceph-mon[75251]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:02 compute-0 ceph-mon[75251]: Saving service mon spec with placement count:5
Jan 31 05:54:02 compute-0 systemd[1]: Started libpod-conmon-51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49.scope.
Jan 31 05:54:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb68a4ffca802e13db9f9b7d3fb8f85b5a0573ea7ac9e9d65a8dc3ff48b4b78b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb68a4ffca802e13db9f9b7d3fb8f85b5a0573ea7ac9e9d65a8dc3ff48b4b78b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb68a4ffca802e13db9f9b7d3fb8f85b5a0573ea7ac9e9d65a8dc3ff48b4b78b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:02 compute-0 ceph-mgr[75550]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 05:54:03 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:03 compute-0 podman[77138]: 2026-01-31 05:54:03.732431366 +0000 UTC m=+2.320810927 container init 51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49 (image=quay.io/ceph/ceph:v20, name=relaxed_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:54:03 compute-0 podman[77138]: 2026-01-31 05:54:03.740169757 +0000 UTC m=+2.328549238 container start 51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49 (image=quay.io/ceph/ceph:v20, name=relaxed_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:54:04 compute-0 podman[77138]: 2026-01-31 05:54:04.016967683 +0000 UTC m=+2.605347164 container attach 51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49 (image=quay.io/ceph/ceph:v20, name=relaxed_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:04 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:04 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 31 05:54:04 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 31 05:54:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 05:54:04 compute-0 podman[77167]: 2026-01-31 05:54:04.111762721 +0000 UTC m=+2.655080155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:04 compute-0 podman[77167]: 2026-01-31 05:54:04.228784916 +0000 UTC m=+2.772102340 container create e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073 (image=quay.io/ceph/ceph:v20, name=stupefied_jang, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:54:04 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:04 compute-0 relaxed_montalcini[77182]: Scheduled mgr update...
Jan 31 05:54:04 compute-0 systemd[1]: Started libpod-conmon-e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073.scope.
Jan 31 05:54:04 compute-0 systemd[1]: libpod-51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49.scope: Deactivated successfully.
Jan 31 05:54:04 compute-0 podman[77138]: 2026-01-31 05:54:04.300666012 +0000 UTC m=+2.889045493 container died 51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49 (image=quay.io/ceph/ceph:v20, name=relaxed_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:04 compute-0 ceph-mgr[75550]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 31 05:54:04 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:04 compute-0 ceph-mon[75251]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 05:54:04 compute-0 podman[77167]: 2026-01-31 05:54:04.393572003 +0000 UTC m=+2.936889447 container init e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073 (image=quay.io/ceph/ceph:v20, name=stupefied_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:54:04 compute-0 podman[77167]: 2026-01-31 05:54:04.398007138 +0000 UTC m=+2.941324532 container start e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073 (image=quay.io/ceph/ceph:v20, name=stupefied_jang, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:04 compute-0 podman[77167]: 2026-01-31 05:54:04.404476454 +0000 UTC m=+2.947793858 container attach e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073 (image=quay.io/ceph/ceph:v20, name=stupefied_jang, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 31 05:54:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb68a4ffca802e13db9f9b7d3fb8f85b5a0573ea7ac9e9d65a8dc3ff48b4b78b-merged.mount: Deactivated successfully.
Jan 31 05:54:04 compute-0 stupefied_jang[77210]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 05:54:04 compute-0 systemd[1]: libpod-e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073.scope: Deactivated successfully.
Jan 31 05:54:04 compute-0 podman[77167]: 2026-01-31 05:54:04.495022643 +0000 UTC m=+3.038340037 container died e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073 (image=quay.io/ceph/ceph:v20, name=stupefied_jang, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:54:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba2457ad221791e8dc60a64ff05f2ea8adc60a843c11e9c677a35e05d376ae86-merged.mount: Deactivated successfully.
Jan 31 05:54:04 compute-0 podman[77167]: 2026-01-31 05:54:04.755461617 +0000 UTC m=+3.298779041 container remove e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073 (image=quay.io/ceph/ceph:v20, name=stupefied_jang, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:04 compute-0 sudo[77033]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 31 05:54:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:05 compute-0 podman[77138]: 2026-01-31 05:54:05.068312126 +0000 UTC m=+3.656691637 container remove 51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49 (image=quay.io/ceph/ceph:v20, name=relaxed_montalcini, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:05 compute-0 systemd[1]: libpod-conmon-51ec500025217ce6e4e8a7425ff13006de1fc7778ba80090f9d2a9ed986cdf49.scope: Deactivated successfully.
Jan 31 05:54:05 compute-0 sudo[77247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:05 compute-0 sudo[77247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:05 compute-0 sudo[77247]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:05 compute-0 podman[77240]: 2026-01-31 05:54:05.203469135 +0000 UTC m=+0.113405459 container create 6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9 (image=quay.io/ceph/ceph:v20, name=loving_hofstadter, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:54:05 compute-0 podman[77240]: 2026-01-31 05:54:05.118009265 +0000 UTC m=+0.027945649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:05 compute-0 sudo[77279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 05:54:05 compute-0 sudo[77279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:05 compute-0 systemd[1]: Started libpod-conmon-6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9.scope.
Jan 31 05:54:05 compute-0 systemd[1]: libpod-conmon-e88420a47379e5ba4460e1310be445db751b1d0a453a4f0a887bbaededeb5073.scope: Deactivated successfully.
Jan 31 05:54:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe05269ebd2253d0e2735cd5e79fd8212b747d2d8eedb9d0f6a7fb857205b93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe05269ebd2253d0e2735cd5e79fd8212b747d2d8eedb9d0f6a7fb857205b93/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe05269ebd2253d0e2735cd5e79fd8212b747d2d8eedb9d0f6a7fb857205b93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:05 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:05 compute-0 ceph-mon[75251]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:05 compute-0 ceph-mon[75251]: Saving service mgr spec with placement count:2
Jan 31 05:54:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:05 compute-0 ceph-mon[75251]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:05 compute-0 ceph-mon[75251]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 05:54:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:05 compute-0 podman[77240]: 2026-01-31 05:54:05.354936025 +0000 UTC m=+0.264872349 container init 6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9 (image=quay.io/ceph/ceph:v20, name=loving_hofstadter, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:54:05 compute-0 podman[77240]: 2026-01-31 05:54:05.362692346 +0000 UTC m=+0.272628640 container start 6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9 (image=quay.io/ceph/ceph:v20, name=loving_hofstadter, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:54:05 compute-0 podman[77240]: 2026-01-31 05:54:05.474175928 +0000 UTC m=+0.384112222 container attach 6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9 (image=quay.io/ceph/ceph:v20, name=loving_hofstadter, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:54:05 compute-0 sudo[77279]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:05 compute-0 sudo[77352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:05 compute-0 sudo[77352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:05 compute-0 sudo[77352]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:05 compute-0 sudo[77377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:54:05 compute-0 sudo[77377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:05 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:05 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service crash spec with placement *
Jan 31 05:54:05 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 31 05:54:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 05:54:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:05 compute-0 loving_hofstadter[77306]: Scheduled crash update...
Jan 31 05:54:05 compute-0 systemd[1]: libpod-6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9.scope: Deactivated successfully.
Jan 31 05:54:05 compute-0 podman[77240]: 2026-01-31 05:54:05.853478501 +0000 UTC m=+0.763414785 container died 6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9 (image=quay.io/ceph/ceph:v20, name=loving_hofstadter, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe05269ebd2253d0e2735cd5e79fd8212b747d2d8eedb9d0f6a7fb857205b93-merged.mount: Deactivated successfully.
Jan 31 05:54:06 compute-0 podman[77240]: 2026-01-31 05:54:06.215331144 +0000 UTC m=+1.125267478 container remove 6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9 (image=quay.io/ceph/ceph:v20, name=loving_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:54:06 compute-0 podman[77429]: 2026-01-31 05:54:06.29492878 +0000 UTC m=+0.064481208 container create b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e (image=quay.io/ceph/ceph:v20, name=beautiful_curie, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:06 compute-0 systemd[1]: Started libpod-conmon-b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e.scope.
Jan 31 05:54:06 compute-0 podman[77429]: 2026-01-31 05:54:06.268792075 +0000 UTC m=+0.038344583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c31cd4a9dc4e49f2e5af59efad67bc7a08a74e89c0840c14e2c3c77ab0fdcf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c31cd4a9dc4e49f2e5af59efad67bc7a08a74e89c0840c14e2c3c77ab0fdcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c31cd4a9dc4e49f2e5af59efad67bc7a08a74e89c0840c14e2c3c77ab0fdcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:06 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:06 compute-0 podman[77429]: 2026-01-31 05:54:06.419636834 +0000 UTC m=+0.189189282 container init b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e (image=quay.io/ceph/ceph:v20, name=beautiful_curie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:06 compute-0 podman[77429]: 2026-01-31 05:54:06.425466658 +0000 UTC m=+0.195019086 container start b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e (image=quay.io/ceph/ceph:v20, name=beautiful_curie, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:54:06 compute-0 podman[77429]: 2026-01-31 05:54:06.448811995 +0000 UTC m=+0.218364463 container attach b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e (image=quay.io/ceph/ceph:v20, name=beautiful_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:54:06 compute-0 systemd[1]: libpod-conmon-6f3100e5c33cb774e0fae54116fbe075ca7be1ac8faf563988b72b06c318edd9.scope: Deactivated successfully.
Jan 31 05:54:06 compute-0 podman[77478]: 2026-01-31 05:54:06.519761938 +0000 UTC m=+0.117703230 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:54:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:06 compute-0 ceph-mon[75251]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:06 compute-0 ceph-mon[75251]: Saving service crash spec with placement *
Jan 31 05:54:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:06 compute-0 ceph-mon[75251]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:06 compute-0 podman[77478]: 2026-01-31 05:54:06.63642965 +0000 UTC m=+0.234370932 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 31 05:54:06 compute-0 sudo[77377]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:06 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/354529543' entity='client.admin' 
Jan 31 05:54:06 compute-0 podman[77429]: 2026-01-31 05:54:06.871556059 +0000 UTC m=+0.641108467 container died b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e (image=quay.io/ceph/ceph:v20, name=beautiful_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:06 compute-0 systemd[1]: libpod-b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e.scope: Deactivated successfully.
Jan 31 05:54:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:06 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:07 compute-0 sudo[77592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:07 compute-0 sudo[77592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:07 compute-0 sudo[77592]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-86c31cd4a9dc4e49f2e5af59efad67bc7a08a74e89c0840c14e2c3c77ab0fdcf-merged.mount: Deactivated successfully.
Jan 31 05:54:07 compute-0 podman[77429]: 2026-01-31 05:54:07.077008688 +0000 UTC m=+0.846561136 container remove b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e (image=quay.io/ceph/ceph:v20, name=beautiful_curie, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:54:07 compute-0 systemd[1]: libpod-conmon-b2add383591a683c392f43dbe26c34ca2522c43303cc8676cfe05e2e9a571d0e.scope: Deactivated successfully.
Jan 31 05:54:07 compute-0 sudo[77618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 05:54:07 compute-0 sudo[77618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:07 compute-0 podman[77641]: 2026-01-31 05:54:07.162761949 +0000 UTC m=+0.066512168 container create 7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827 (image=quay.io/ceph/ceph:v20, name=cool_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:54:07 compute-0 systemd[1]: Started libpod-conmon-7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827.scope.
Jan 31 05:54:07 compute-0 podman[77641]: 2026-01-31 05:54:07.127403892 +0000 UTC m=+0.031154171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f2ddefb8d57e38b1ff0cd30bfacee81ea00701be43a712f24576af0625bdd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f2ddefb8d57e38b1ff0cd30bfacee81ea00701be43a712f24576af0625bdd3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f2ddefb8d57e38b1ff0cd30bfacee81ea00701be43a712f24576af0625bdd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:07 compute-0 podman[77641]: 2026-01-31 05:54:07.265435962 +0000 UTC m=+0.169186221 container init 7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827 (image=quay.io/ceph/ceph:v20, name=cool_shaw, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:07 compute-0 podman[77641]: 2026-01-31 05:54:07.272356675 +0000 UTC m=+0.176106904 container start 7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827 (image=quay.io/ceph/ceph:v20, name=cool_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:54:07 compute-0 podman[77641]: 2026-01-31 05:54:07.278602693 +0000 UTC m=+0.182352912 container attach 7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827 (image=quay.io/ceph/ceph:v20, name=cool_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:07 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:07 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77676 (sysctl)
Jan 31 05:54:07 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 05:54:07 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 05:54:07 compute-0 sudo[77618]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:07 compute-0 sudo[77717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:07 compute-0 sudo[77717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:07 compute-0 sudo[77717]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:07 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 31 05:54:07 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:07 compute-0 sudo[77742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 31 05:54:07 compute-0 sudo[77742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:07 compute-0 systemd[1]: libpod-7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827.scope: Deactivated successfully.
Jan 31 05:54:07 compute-0 conmon[77659]: conmon 7bb00f39e2a6c41f74bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827.scope/container/memory.events
Jan 31 05:54:07 compute-0 podman[77641]: 2026-01-31 05:54:07.710309611 +0000 UTC m=+0.614059820 container died 7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827 (image=quay.io/ceph/ceph:v20, name=cool_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-75f2ddefb8d57e38b1ff0cd30bfacee81ea00701be43a712f24576af0625bdd3-merged.mount: Deactivated successfully.
Jan 31 05:54:07 compute-0 podman[77641]: 2026-01-31 05:54:07.776169595 +0000 UTC m=+0.679919814 container remove 7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827 (image=quay.io/ceph/ceph:v20, name=cool_shaw, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:07 compute-0 systemd[1]: libpod-conmon-7bb00f39e2a6c41f74bb7ee9a73e5b7241af99889cfc6b6a5b168b54bb357827.scope: Deactivated successfully.
Jan 31 05:54:07 compute-0 podman[77783]: 2026-01-31 05:54:07.856935522 +0000 UTC m=+0.063529834 container create 1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1 (image=quay.io/ceph/ceph:v20, name=boring_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:07 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/354529543' entity='client.admin' 
Jan 31 05:54:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:07 compute-0 systemd[1]: Started libpod-conmon-1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1.scope.
Jan 31 05:54:07 compute-0 podman[77783]: 2026-01-31 05:54:07.81287552 +0000 UTC m=+0.019469862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732e6bdd118d7f6b949b8d7aa8ffbfead97e1e6fc0574d9951cbe7e9007344d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732e6bdd118d7f6b949b8d7aa8ffbfead97e1e6fc0574d9951cbe7e9007344d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732e6bdd118d7f6b949b8d7aa8ffbfead97e1e6fc0574d9951cbe7e9007344d8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:07 compute-0 podman[77783]: 2026-01-31 05:54:07.961939347 +0000 UTC m=+0.168533709 container init 1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1 (image=quay.io/ceph/ceph:v20, name=boring_morse, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 05:54:07 compute-0 podman[77783]: 2026-01-31 05:54:07.969137278 +0000 UTC m=+0.175731590 container start 1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1 (image=quay.io/ceph/ceph:v20, name=boring_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:07 compute-0 sudo[77742]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:08 compute-0 podman[77783]: 2026-01-31 05:54:08.193362035 +0000 UTC m=+0.399956397 container attach 1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1 (image=quay.io/ceph/ceph:v20, name=boring_morse, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:54:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:08 compute-0 sudo[77839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:08 compute-0 sudo[77839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:08 compute-0 sudo[77839]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:08 compute-0 sudo[77864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- inventory --format=json-pretty --filter-for-batch
Jan 31 05:54:08 compute-0 sudo[77864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:08 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:08 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 05:54:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:08 compute-0 ceph-mgr[75550]: [cephadm INFO root] Added label _admin to host compute-0
Jan 31 05:54:08 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 31 05:54:08 compute-0 boring_morse[77799]: Added label _admin to host compute-0
Jan 31 05:54:08 compute-0 systemd[1]: libpod-1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1.scope: Deactivated successfully.
Jan 31 05:54:08 compute-0 podman[77902]: 2026-01-31 05:54:08.5992694 +0000 UTC m=+0.019754492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:08 compute-0 podman[77902]: 2026-01-31 05:54:08.732475081 +0000 UTC m=+0.152960193 container create 5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sinoussi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:54:08 compute-0 podman[77783]: 2026-01-31 05:54:08.736254144 +0000 UTC m=+0.942848456 container died 1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1 (image=quay.io/ceph/ceph:v20, name=boring_morse, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-732e6bdd118d7f6b949b8d7aa8ffbfead97e1e6fc0574d9951cbe7e9007344d8-merged.mount: Deactivated successfully.
Jan 31 05:54:09 compute-0 podman[77783]: 2026-01-31 05:54:09.248445777 +0000 UTC m=+1.455040089 container remove 1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1 (image=quay.io/ceph/ceph:v20, name=boring_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:54:09 compute-0 systemd[1]: libpod-conmon-1424d16c05c1085145bf9f8f75be5787e2d0a6473649bcf1d1d2d26ac83eeac1.scope: Deactivated successfully.
Jan 31 05:54:09 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:09 compute-0 ceph-mon[75251]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:09 compute-0 ceph-mon[75251]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:09 compute-0 ceph-mon[75251]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:09 compute-0 systemd[1]: Started libpod-conmon-5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed.scope.
Jan 31 05:54:09 compute-0 podman[77931]: 2026-01-31 05:54:09.366980675 +0000 UTC m=+0.102552480 container create f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7 (image=quay.io/ceph/ceph:v20, name=objective_merkle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:54:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:09 compute-0 podman[77931]: 2026-01-31 05:54:09.33799794 +0000 UTC m=+0.073569745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:09 compute-0 systemd[1]: Started libpod-conmon-f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7.scope.
Jan 31 05:54:09 compute-0 podman[77902]: 2026-01-31 05:54:09.495506023 +0000 UTC m=+0.915991195 container init 5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:09 compute-0 podman[77902]: 2026-01-31 05:54:09.502775937 +0000 UTC m=+0.923261039 container start 5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sinoussi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 05:54:09 compute-0 charming_sinoussi[77947]: 167 167
Jan 31 05:54:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:09 compute-0 systemd[1]: libpod-5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed.scope: Deactivated successfully.
Jan 31 05:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a91ae2ea982d5642863dbd5844df1ea97c87210f22344242e104d43bade4c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a91ae2ea982d5642863dbd5844df1ea97c87210f22344242e104d43bade4c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73a91ae2ea982d5642863dbd5844df1ea97c87210f22344242e104d43bade4c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:09 compute-0 podman[77902]: 2026-01-31 05:54:09.54286373 +0000 UTC m=+0.963348842 container attach 5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sinoussi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:54:09 compute-0 podman[77902]: 2026-01-31 05:54:09.543309375 +0000 UTC m=+0.963794477 container died 5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sinoussi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b852f55849e8e14c1065f4b0f0b820bbbea048d99182705fdcc98fba5d088952-merged.mount: Deactivated successfully.
Jan 31 05:54:09 compute-0 podman[77902]: 2026-01-31 05:54:09.726659392 +0000 UTC m=+1.147144494 container remove 5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sinoussi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:09 compute-0 systemd[1]: libpod-conmon-5db3a32e2e1a29ff511817cdedeef8973c39129da0762405598bdd63252793ed.scope: Deactivated successfully.
Jan 31 05:54:09 compute-0 podman[77931]: 2026-01-31 05:54:09.776483945 +0000 UTC m=+0.512055770 container init f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7 (image=quay.io/ceph/ceph:v20, name=objective_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:09 compute-0 podman[77931]: 2026-01-31 05:54:09.783210851 +0000 UTC m=+0.518782686 container start f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7 (image=quay.io/ceph/ceph:v20, name=objective_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:09 compute-0 podman[77931]: 2026-01-31 05:54:09.803594274 +0000 UTC m=+0.539166099 container attach f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7 (image=quay.io/ceph/ceph:v20, name=objective_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:54:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 31 05:54:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/307805268' entity='client.admin' 
Jan 31 05:54:10 compute-0 objective_merkle[77952]: set mgr/dashboard/cluster/status
Jan 31 05:54:10 compute-0 systemd[1]: libpod-f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7.scope: Deactivated successfully.
Jan 31 05:54:10 compute-0 podman[77931]: 2026-01-31 05:54:10.377169876 +0000 UTC m=+1.112741691 container died f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7 (image=quay.io/ceph/ceph:v20, name=objective_merkle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:10 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:10 compute-0 ceph-mon[75251]: Added label _admin to host compute-0
Jan 31 05:54:10 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/307805268' entity='client.admin' 
Jan 31 05:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-73a91ae2ea982d5642863dbd5844df1ea97c87210f22344242e104d43bade4c5-merged.mount: Deactivated successfully.
Jan 31 05:54:10 compute-0 podman[77931]: 2026-01-31 05:54:10.547319771 +0000 UTC m=+1.282891616 container remove f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7 (image=quay.io/ceph/ceph:v20, name=objective_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:10 compute-0 systemd[1]: libpod-conmon-f6c01848f079dacae29dd3b032b0ba2b2a71f90edf6be8d0af24e0ad75be3cc7.scope: Deactivated successfully.
Jan 31 05:54:10 compute-0 systemd[1]: Reloading.
Jan 31 05:54:10 compute-0 systemd-sysv-generator[78040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:10 compute-0 systemd-rc-local-generator[78034]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:10 compute-0 sudo[74181]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:11 compute-0 podman[78053]: 2026-01-31 05:54:11.072620454 +0000 UTC m=+0.087157182 container create a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:54:11 compute-0 podman[78053]: 2026-01-31 05:54:11.02078564 +0000 UTC m=+0.035322418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:11 compute-0 systemd[1]: Started libpod-conmon-a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528.scope.
Jan 31 05:54:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad81fa30cc86ca249b38407d2a99ea6170b2dae966fb03c29c69b4529265c138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad81fa30cc86ca249b38407d2a99ea6170b2dae966fb03c29c69b4529265c138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad81fa30cc86ca249b38407d2a99ea6170b2dae966fb03c29c69b4529265c138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad81fa30cc86ca249b38407d2a99ea6170b2dae966fb03c29c69b4529265c138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:11 compute-0 sudo[78095]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueayuvukhexpgprrqluoiaijhadkwvbt ; /usr/bin/python3'
Jan 31 05:54:11 compute-0 sudo[78095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:11 compute-0 podman[78053]: 2026-01-31 05:54:11.218323052 +0000 UTC m=+0.232859770 container init a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:54:11 compute-0 podman[78053]: 2026-01-31 05:54:11.225875507 +0000 UTC m=+0.240412195 container start a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 31 05:54:11 compute-0 podman[78053]: 2026-01-31 05:54:11.252024212 +0000 UTC m=+0.266560930 container attach a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_margulis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:11 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:11 compute-0 python3[78097]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:11 compute-0 ceph-mon[75251]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:11 compute-0 podman[78100]: 2026-01-31 05:54:11.438028081 +0000 UTC m=+0.063502693 container create 9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681 (image=quay.io/ceph/ceph:v20, name=dreamy_carson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:11 compute-0 podman[78100]: 2026-01-31 05:54:11.391583475 +0000 UTC m=+0.017058117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:11 compute-0 systemd[1]: Started libpod-conmon-9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681.scope.
Jan 31 05:54:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f95693de92e5c952c5db1fdd292b77ce3dc44bbd15c6fb3522c23a75f1d3db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f95693de92e5c952c5db1fdd292b77ce3dc44bbd15c6fb3522c23a75f1d3db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:11 compute-0 podman[78100]: 2026-01-31 05:54:11.624225357 +0000 UTC m=+0.249700029 container init 9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681 (image=quay.io/ceph/ceph:v20, name=dreamy_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:54:11 compute-0 podman[78100]: 2026-01-31 05:54:11.630521517 +0000 UTC m=+0.255996119 container start 9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681 (image=quay.io/ceph/ceph:v20, name=dreamy_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:11 compute-0 podman[78100]: 2026-01-31 05:54:11.64718914 +0000 UTC m=+0.272663762 container attach 9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681 (image=quay.io/ceph/ceph:v20, name=dreamy_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:54:11 compute-0 happy_margulis[78070]: [
Jan 31 05:54:11 compute-0 happy_margulis[78070]:     {
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "available": false,
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "being_replaced": false,
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "ceph_device_lvm": false,
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "lsm_data": {},
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "lvs": [],
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "path": "/dev/sr0",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "rejected_reasons": [
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "Has a FileSystem",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "Insufficient space (<5GB)"
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         ],
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         "sys_api": {
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "actuators": null,
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "device_nodes": [
Jan 31 05:54:11 compute-0 happy_margulis[78070]:                 "sr0"
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             ],
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "devname": "sr0",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "human_readable_size": "482.00 KB",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "id_bus": "ata",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "model": "QEMU DVD-ROM",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "nr_requests": "2",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "parent": "/dev/sr0",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "partitions": {},
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "path": "/dev/sr0",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "removable": "1",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "rev": "2.5+",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "ro": "0",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "rotational": "1",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "sas_address": "",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "sas_device_handle": "",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "scheduler_mode": "mq-deadline",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "sectors": 0,
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "sectorsize": "2048",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "size": 493568.0,
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "support_discard": "2048",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "type": "disk",
Jan 31 05:54:11 compute-0 happy_margulis[78070]:             "vendor": "QEMU"
Jan 31 05:54:11 compute-0 happy_margulis[78070]:         }
Jan 31 05:54:11 compute-0 happy_margulis[78070]:     }
Jan 31 05:54:11 compute-0 happy_margulis[78070]: ]
Jan 31 05:54:11 compute-0 systemd[1]: libpod-a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528.scope: Deactivated successfully.
Jan 31 05:54:11 compute-0 podman[78053]: 2026-01-31 05:54:11.676512027 +0000 UTC m=+0.691048765 container died a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad81fa30cc86ca249b38407d2a99ea6170b2dae966fb03c29c69b4529265c138-merged.mount: Deactivated successfully.
Jan 31 05:54:11 compute-0 podman[78053]: 2026-01-31 05:54:11.858833327 +0000 UTC m=+0.873370005 container remove a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:11 compute-0 systemd[1]: libpod-conmon-a5ad6c334d464abc24dc940cff92ad418d606cf7eb8b0ab7af01545386798528.scope: Deactivated successfully.
Jan 31 05:54:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:11 compute-0 sudo[77864]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:11 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 31 05:54:12 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:12 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3586308451' entity='client.admin' 
Jan 31 05:54:12 compute-0 systemd[1]: libpod-9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681.scope: Deactivated successfully.
Jan 31 05:54:12 compute-0 podman[78100]: 2026-01-31 05:54:12.245584451 +0000 UTC m=+0.871059023 container died 9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681 (image=quay.io/ceph/ceph:v20, name=dreamy_carson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:12 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:12 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 05:54:12 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 05:54:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:12 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:54:12 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:12 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 05:54:12 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 05:54:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-09f95693de92e5c952c5db1fdd292b77ce3dc44bbd15c6fb3522c23a75f1d3db-merged.mount: Deactivated successfully.
Jan 31 05:54:12 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:12 compute-0 sudo[78837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 05:54:12 compute-0 sudo[78837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[78837]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[78862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph
Jan 31 05:54:12 compute-0 sudo[78862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[78862]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 podman[78100]: 2026-01-31 05:54:12.454924056 +0000 UTC m=+1.080398658 container remove 9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681 (image=quay.io/ceph/ceph:v20, name=dreamy_carson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:12 compute-0 sudo[78095]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[78887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.conf.new
Jan 31 05:54:12 compute-0 sudo[78887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[78887]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 systemd[1]: libpod-conmon-9203495f453401adb7bb7a35e22c1fe17d4cebd3e91a2cbcc2dabb2814310681.scope: Deactivated successfully.
Jan 31 05:54:12 compute-0 sudo[78912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:12 compute-0 sudo[78912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[78912]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[78937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.conf.new
Jan 31 05:54:12 compute-0 sudo[78937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[78937]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[78985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.conf.new
Jan 31 05:54:12 compute-0 sudo[78985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[78985]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[79010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.conf.new
Jan 31 05:54:12 compute-0 sudo[79010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[79010]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[79035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 05:54:12 compute-0 sudo[79035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[79035]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf
Jan 31 05:54:12 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf
Jan 31 05:54:12 compute-0 sudo[79060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config
Jan 31 05:54:12 compute-0 sudo[79060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[79060]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[79089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config
Jan 31 05:54:12 compute-0 sudo[79089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[79089]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[79151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf.new
Jan 31 05:54:12 compute-0 sudo[79151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[79151]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[79206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:12 compute-0 sudo[79206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[79206]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:12 compute-0 sudo[79235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf.new
Jan 31 05:54:12 compute-0 sudo[79235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:12 compute-0 sudo[79235]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf.new
Jan 31 05:54:13 compute-0 sudo[79283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79283]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf.new
Jan 31 05:54:13 compute-0 sudo[79331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79331]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:13 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3586308451' entity='client.admin' 
Jan 31 05:54:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 05:54:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:13 compute-0 ceph-mon[75251]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 05:54:13 compute-0 ceph-mon[75251]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:13 compute-0 sudo[79380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf.new /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf
Jan 31 05:54:13 compute-0 sudo[79380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzfkunjjejmnfevhhhpmeihamrnejljx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769838852.795153-36459-163846860416952/async_wrapper.py j487473450352 30 /home/zuul/.ansible/tmp/ansible-tmp-1769838852.795153-36459-163846860416952/AnsiballZ_command.py _'
Jan 31 05:54:13 compute-0 sudo[79428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:13 compute-0 sudo[79380]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 05:54:13 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 05:54:13 compute-0 sudo[79433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 05:54:13 compute-0 sudo[79433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79433]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph
Jan 31 05:54:13 compute-0 sudo[79458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79458]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.client.admin.keyring.new
Jan 31 05:54:13 compute-0 sudo[79483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79483]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:13 compute-0 ansible-async_wrapper.py[79432]: Invoked with j487473450352 30 /home/zuul/.ansible/tmp/ansible-tmp-1769838852.795153-36459-163846860416952/AnsiballZ_command.py _
Jan 31 05:54:13 compute-0 ansible-async_wrapper.py[79514]: Starting module and watcher
Jan 31 05:54:13 compute-0 ansible-async_wrapper.py[79514]: Start watching 79516 (30)
Jan 31 05:54:13 compute-0 ansible-async_wrapper.py[79516]: Start module (79516)
Jan 31 05:54:13 compute-0 ansible-async_wrapper.py[79432]: Return async_wrapper task started.
Jan 31 05:54:13 compute-0 sudo[79428]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:13 compute-0 sudo[79508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79508]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.client.admin.keyring.new
Jan 31 05:54:13 compute-0 sudo[79538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79538]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.client.admin.keyring.new
Jan 31 05:54:13 compute-0 sudo[79586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 python3[79519]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:13 compute-0 sudo[79586]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.client.admin.keyring.new
Jan 31 05:54:13 compute-0 sudo[79616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79616]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 podman[79611]: 2026-01-31 05:54:13.57546764 +0000 UTC m=+0.077789994 container create d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574 (image=quay.io/ceph/ceph:v20, name=inspiring_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:54:13 compute-0 sudo[79649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 31 05:54:13 compute-0 sudo[79649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79649]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring
Jan 31 05:54:13 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring
Jan 31 05:54:13 compute-0 podman[79611]: 2026-01-31 05:54:13.520927381 +0000 UTC m=+0.023249745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:13 compute-0 sudo[79674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config
Jan 31 05:54:13 compute-0 sudo[79674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79674]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config
Jan 31 05:54:13 compute-0 sudo[79699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79699]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 systemd[1]: Started libpod-conmon-d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574.scope.
Jan 31 05:54:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab4ba31cc2e7a9ab27c25229469e1646d9433b53348d3682d7fe0f383bacfd6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab4ba31cc2e7a9ab27c25229469e1646d9433b53348d3682d7fe0f383bacfd6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:13 compute-0 sudo[79726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring.new
Jan 31 05:54:13 compute-0 sudo[79726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79726]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 podman[79611]: 2026-01-31 05:54:13.769547031 +0000 UTC m=+0.271869425 container init d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574 (image=quay.io/ceph/ceph:v20, name=inspiring_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:54:13 compute-0 podman[79611]: 2026-01-31 05:54:13.778672081 +0000 UTC m=+0.280994445 container start d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574 (image=quay.io/ceph/ceph:v20, name=inspiring_nightingale, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:13 compute-0 podman[79611]: 2026-01-31 05:54:13.800499955 +0000 UTC m=+0.302822569 container attach d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574 (image=quay.io/ceph/ceph:v20, name=inspiring_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:13 compute-0 sudo[79754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:13 compute-0 sudo[79754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79754]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring.new
Jan 31 05:54:13 compute-0 sudo[79780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79780]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:13 compute-0 sudo[79847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring.new
Jan 31 05:54:13 compute-0 sudo[79847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:13 compute-0 sudo[79847]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:14 compute-0 sudo[79872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring.new
Jan 31 05:54:14 compute-0 sudo[79872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:14 compute-0 sudo[79872]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:14 compute-0 sudo[79897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-797ee2fc-ca49-5eee-87c0-542bb035a7d7/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring.new /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring
Jan 31 05:54:14 compute-0 sudo[79897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:14 compute-0 sudo[79897]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:54:14 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:54:14 compute-0 inspiring_nightingale[79727]: 
Jan 31 05:54:14 compute-0 inspiring_nightingale[79727]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 05:54:14 compute-0 systemd[1]: libpod-d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574.scope: Deactivated successfully.
Jan 31 05:54:14 compute-0 podman[79611]: 2026-01-31 05:54:14.250208412 +0000 UTC m=+0.752530776 container died d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574 (image=quay.io/ceph/ceph:v20, name=inspiring_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:54:14 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:14 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 97c0fee8-cd14-4651-8a11-b244c73fba6a (Updating crash deployment (+1 -> 1))
Jan 31 05:54:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 31 05:54:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 05:54:14 compute-0 ceph-mon[75251]: Updating compute-0:/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.conf
Jan 31 05:54:14 compute-0 ceph-mon[75251]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 05:54:14 compute-0 ceph-mon[75251]: Updating compute-0:/var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/config/ceph.client.admin.keyring
Jan 31 05:54:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 05:54:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-dab4ba31cc2e7a9ab27c25229469e1646d9433b53348d3682d7fe0f383bacfd6-merged.mount: Deactivated successfully.
Jan 31 05:54:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:14 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:14 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 31 05:54:14 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 31 05:54:14 compute-0 sudo[79961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:14 compute-0 sudo[79961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:14 compute-0 sudo[79961]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:14 compute-0 podman[79611]: 2026-01-31 05:54:14.547286408 +0000 UTC m=+1.049608752 container remove d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574 (image=quay.io/ceph/ceph:v20, name=inspiring_nightingale, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:14 compute-0 systemd[1]: libpod-conmon-d70da685669d5cc2a06263fe4e832ab6167f6ec8a4bb552bf4b5d4b19aa4b574.scope: Deactivated successfully.
Jan 31 05:54:14 compute-0 sudo[79986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:14 compute-0 ansible-async_wrapper.py[79516]: Module complete (79516)
Jan 31 05:54:14 compute-0 sudo[79986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:14 compute-0 sudo[80034]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaczbosvluaifaiodoifaoahyykxusfk ; /usr/bin/python3'
Jan 31 05:54:14 compute-0 sudo[80034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:14 compute-0 python3[80036]: ansible-ansible.legacy.async_status Invoked with jid=j487473450352.79432 mode=status _async_dir=/root/.ansible_async
Jan 31 05:54:14 compute-0 sudo[80034]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:14 compute-0 sudo[80113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytxovujlgvkiamjoqcxckeemobyyptli ; /usr/bin/python3'
Jan 31 05:54:14 compute-0 sudo[80113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:15 compute-0 podman[80125]: 2026-01-31 05:54:14.941870837 +0000 UTC m=+0.024802599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:15 compute-0 python3[80124]: ansible-ansible.legacy.async_status Invoked with jid=j487473450352.79432 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 05:54:15 compute-0 sudo[80113]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:15 compute-0 podman[80125]: 2026-01-31 05:54:15.073611667 +0000 UTC m=+0.156543449 container create aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:54:15 compute-0 systemd[1]: Started libpod-conmon-aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5.scope.
Jan 31 05:54:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:15 compute-0 podman[80125]: 2026-01-31 05:54:15.240803818 +0000 UTC m=+0.323735580 container init aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_babbage, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:15 compute-0 podman[80125]: 2026-01-31 05:54:15.247850104 +0000 UTC m=+0.330781886 container start aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:54:15 compute-0 nervous_babbage[80141]: 167 167
Jan 31 05:54:15 compute-0 systemd[1]: libpod-aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5.scope: Deactivated successfully.
Jan 31 05:54:15 compute-0 podman[80125]: 2026-01-31 05:54:15.30031269 +0000 UTC m=+0.383244542 container attach aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:15 compute-0 podman[80125]: 2026-01-31 05:54:15.300813608 +0000 UTC m=+0.383745380 container died aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 31 05:54:15 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:15 compute-0 sudo[80179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdlsypkusbzwsleuhcjaazqfxswetjrg ; /usr/bin/python3'
Jan 31 05:54:15 compute-0 sudo[80179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:54:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:54:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:54:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:54:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:54:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:54:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-af2ddd8950c0d96732a169b55db4139d7d5cf3d40051a69a782b0c956c473c2d-merged.mount: Deactivated successfully.
Jan 31 05:54:15 compute-0 ceph-mon[75251]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:54:15 compute-0 ceph-mon[75251]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 05:54:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 05:54:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:15 compute-0 ceph-mon[75251]: Deploying daemon crash.compute-0 on compute-0
Jan 31 05:54:15 compute-0 podman[80125]: 2026-01-31 05:54:15.542880359 +0000 UTC m=+0.625812151 container remove aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_babbage, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:54:15 compute-0 python3[80181]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 05:54:15 compute-0 sudo[80179]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:15 compute-0 systemd[1]: Reloading.
Jan 31 05:54:15 compute-0 systemd-sysv-generator[80213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:15 compute-0 systemd-rc-local-generator[80208]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:15 compute-0 systemd[1]: libpod-conmon-aed84954474162fb1cc8960c0283fe2f78be3686e826062318a78fba39d2f2f5.scope: Deactivated successfully.
Jan 31 05:54:15 compute-0 sudo[80245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtbkkgvmzbkqkixleynjuobraebzcozn ; /usr/bin/python3'
Jan 31 05:54:15 compute-0 sudo[80245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:15 compute-0 systemd[1]: Reloading.
Jan 31 05:54:15 compute-0 systemd-rc-local-generator[80276]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:15 compute-0 systemd-sysv-generator[80279]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:15 compute-0 python3[80251]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:16 compute-0 podman[80285]: 2026-01-31 05:54:16.071999665 +0000 UTC m=+0.055742502 container create faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce (image=quay.io/ceph/ceph:v20, name=priceless_wing, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:16 compute-0 systemd[1]: Started libpod-conmon-faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce.scope.
Jan 31 05:54:16 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:54:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b3700b846a503abc4d63c72bc6d68b0339a3657ed842cc1fd35a712a2549d5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b3700b846a503abc4d63c72bc6d68b0339a3657ed842cc1fd35a712a2549d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b3700b846a503abc4d63c72bc6d68b0339a3657ed842cc1fd35a712a2549d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:16 compute-0 podman[80285]: 2026-01-31 05:54:16.049687754 +0000 UTC m=+0.033430681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:16 compute-0 podman[80285]: 2026-01-31 05:54:16.164997189 +0000 UTC m=+0.148740036 container init faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce (image=quay.io/ceph/ceph:v20, name=priceless_wing, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:16 compute-0 podman[80285]: 2026-01-31 05:54:16.169645692 +0000 UTC m=+0.153388529 container start faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce (image=quay.io/ceph/ceph:v20, name=priceless_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:54:16 compute-0 podman[80285]: 2026-01-31 05:54:16.175143644 +0000 UTC m=+0.158886481 container attach faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce (image=quay.io/ceph/ceph:v20, name=priceless_wing, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:54:16 compute-0 podman[80373]: 2026-01-31 05:54:16.325582289 +0000 UTC m=+0.049060578 container create 20fded41c264ba18393cf46e0be14f2569ace1b3c6b5a5f14d92959a746e2bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a82e2ae240274b641672c9606e7c919abcb67c9b31b3dd059b7ec6cca341918/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a82e2ae240274b641672c9606e7c919abcb67c9b31b3dd059b7ec6cca341918/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a82e2ae240274b641672c9606e7c919abcb67c9b31b3dd059b7ec6cca341918/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a82e2ae240274b641672c9606e7c919abcb67c9b31b3dd059b7ec6cca341918/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:16 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:16 compute-0 podman[80373]: 2026-01-31 05:54:16.391082461 +0000 UTC m=+0.114560750 container init 20fded41c264ba18393cf46e0be14f2569ace1b3c6b5a5f14d92959a746e2bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:16 compute-0 podman[80373]: 2026-01-31 05:54:16.296639886 +0000 UTC m=+0.020118275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:16 compute-0 podman[80373]: 2026-01-31 05:54:16.394625535 +0000 UTC m=+0.118103834 container start 20fded41c264ba18393cf46e0be14f2569ace1b3c6b5a5f14d92959a746e2bfe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:54:16 compute-0 bash[80373]: 20fded41c264ba18393cf46e0be14f2569ace1b3c6b5a5f14d92959a746e2bfe
Jan 31 05:54:16 compute-0 systemd[1]: Started Ceph crash.compute-0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:54:16 compute-0 sudo[79986]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:16 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 97c0fee8-cd14-4651-8a11-b244c73fba6a (Updating crash deployment (+1 -> 1))
Jan 31 05:54:16 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 97c0fee8-cd14-4651-8a11-b244c73fba6a (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:16 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 33c42f29-874b-415a-a5c5-fdc4717b3a74 (Updating mgr deployment (+1 -> 2))
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.stcefq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.stcefq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.stcefq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mgr services"} : dispatch
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:16 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:16 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.stcefq on compute-0
Jan 31 05:54:16 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.stcefq on compute-0
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: 2026-01-31T05:54:16.539+0000 7fb010e7d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: 2026-01-31T05:54:16.539+0000 7fb010e7d640 -1 AuthRegistry(0x7fb00c052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 05:54:16 compute-0 sudo[80395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: 2026-01-31T05:54:16.540+0000 7fb010e7d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: 2026-01-31T05:54:16.540+0000 7fb010e7d640 -1 AuthRegistry(0x7fb010e7bfe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: 2026-01-31T05:54:16.541+0000 7fb00a575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: 2026-01-31T05:54:16.542+0000 7fb010e7d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 05:54:16 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-crash-compute-0[80388]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 05:54:16 compute-0 sudo[80395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:16 compute-0 sudo[80395]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:16 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:54:16 compute-0 priceless_wing[80303]: 
Jan 31 05:54:16 compute-0 priceless_wing[80303]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 05:54:16 compute-0 systemd[1]: libpod-faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce.scope: Deactivated successfully.
Jan 31 05:54:16 compute-0 podman[80285]: 2026-01-31 05:54:16.584973536 +0000 UTC m=+0.568716413 container died faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce (image=quay.io/ceph/ceph:v20, name=priceless_wing, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:16 compute-0 sudo[80430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:16 compute-0 sudo[80430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b3700b846a503abc4d63c72bc6d68b0339a3657ed842cc1fd35a712a2549d5-merged.mount: Deactivated successfully.
Jan 31 05:54:16 compute-0 podman[80285]: 2026-01-31 05:54:16.674607623 +0000 UTC m=+0.658350500 container remove faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce (image=quay.io/ceph/ceph:v20, name=priceless_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:16 compute-0 sudo[80245]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:16 compute-0 systemd[1]: libpod-conmon-faac47620725c7c9eabac33a3fffc1132eab3d7255cd9a6eb09d42bafec3e6ce.scope: Deactivated successfully.
Jan 31 05:54:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:16 compute-0 podman[80510]: 2026-01-31 05:54:16.971399029 +0000 UTC m=+0.044316192 container create b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:54:16 compute-0 sudo[80547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qppcqfbxplfreufnqbuoktllxdanyiia ; /usr/bin/python3'
Jan 31 05:54:16 compute-0 sudo[80547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:17 compute-0 systemd[1]: Started libpod-conmon-b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f.scope.
Jan 31 05:54:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:17 compute-0 podman[80510]: 2026-01-31 05:54:17.032936602 +0000 UTC m=+0.105853815 container init b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:54:17 compute-0 podman[80510]: 2026-01-31 05:54:17.037686469 +0000 UTC m=+0.110603672 container start b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:17 compute-0 bold_goldstine[80552]: 167 167
Jan 31 05:54:17 compute-0 systemd[1]: libpod-b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f.scope: Deactivated successfully.
Jan 31 05:54:17 compute-0 podman[80510]: 2026-01-31 05:54:16.946927523 +0000 UTC m=+0.019844766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:17 compute-0 podman[80510]: 2026-01-31 05:54:17.041763891 +0000 UTC m=+0.114681144 container attach b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:54:17 compute-0 podman[80510]: 2026-01-31 05:54:17.044412694 +0000 UTC m=+0.117329897 container died b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2600e5e7aabab6e310ba3bf8d8bea7e92d475c0927b09bc71a1b65400f7051e-merged.mount: Deactivated successfully.
Jan 31 05:54:17 compute-0 podman[80510]: 2026-01-31 05:54:17.129012355 +0000 UTC m=+0.201929558 container remove b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:17 compute-0 systemd[1]: libpod-conmon-b4160b36b5e2fa124db20a4bd50fdc150d944a94b5841bdd22347e67bb2a6a3f.scope: Deactivated successfully.
Jan 31 05:54:17 compute-0 python3[80549]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:17 compute-0 podman[80572]: 2026-01-31 05:54:17.212408903 +0000 UTC m=+0.051087149 container create e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7 (image=quay.io/ceph/ceph:v20, name=sad_brown, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:17 compute-0 systemd[1]: Reloading.
Jan 31 05:54:17 compute-0 podman[80572]: 2026-01-31 05:54:17.191208861 +0000 UTC m=+0.029892977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:17 compute-0 systemd-rc-local-generator[80605]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:17 compute-0 systemd-sysv-generator[80615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:17 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:17 compute-0 ceph-mon[75251]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.stcefq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.stcefq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mgr services"} : dispatch
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:17 compute-0 ceph-mon[75251]: Deploying daemon mgr.compute-0.stcefq on compute-0
Jan 31 05:54:17 compute-0 ceph-mon[75251]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:54:17 compute-0 systemd[1]: Started libpod-conmon-e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7.scope.
Jan 31 05:54:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08447708c138a8c5d8e43884b0037d966297cf55a8878155c2acf18ed1e5cb1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08447708c138a8c5d8e43884b0037d966297cf55a8878155c2acf18ed1e5cb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f08447708c138a8c5d8e43884b0037d966297cf55a8878155c2acf18ed1e5cb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:17 compute-0 systemd[1]: Reloading.
Jan 31 05:54:17 compute-0 podman[80572]: 2026-01-31 05:54:17.529822611 +0000 UTC m=+0.368500957 container init e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7 (image=quay.io/ceph/ceph:v20, name=sad_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:17 compute-0 podman[80572]: 2026-01-31 05:54:17.539918154 +0000 UTC m=+0.378596400 container start e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7 (image=quay.io/ceph/ceph:v20, name=sad_brown, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:54:17 compute-0 podman[80572]: 2026-01-31 05:54:17.544502595 +0000 UTC m=+0.383180881 container attach e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7 (image=quay.io/ceph/ceph:v20, name=sad_brown, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:54:17 compute-0 systemd-sysv-generator[80653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:17 compute-0 systemd-rc-local-generator[80650]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:17 compute-0 systemd[1]: Starting Ceph mgr.compute-0.stcefq for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:54:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 31 05:54:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1085691184' entity='client.admin' 
Jan 31 05:54:17 compute-0 systemd[1]: libpod-e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7.scope: Deactivated successfully.
Jan 31 05:54:17 compute-0 podman[80572]: 2026-01-31 05:54:17.920415539 +0000 UTC m=+0.759093815 container died e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7 (image=quay.io/ceph/ceph:v20, name=sad_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f08447708c138a8c5d8e43884b0037d966297cf55a8878155c2acf18ed1e5cb1-merged.mount: Deactivated successfully.
Jan 31 05:54:17 compute-0 podman[80572]: 2026-01-31 05:54:17.97015174 +0000 UTC m=+0.808829996 container remove e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7 (image=quay.io/ceph/ceph:v20, name=sad_brown, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:54:17 compute-0 systemd[1]: libpod-conmon-e371706b40ea3d3e221262d2d79c74dae33b3b498dea9393ce43faf5596ee6b7.scope: Deactivated successfully.
Jan 31 05:54:18 compute-0 sudo[80547]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:18 compute-0 podman[80747]: 2026-01-31 05:54:18.076500362 +0000 UTC m=+0.046873031 container create 47d156703a67411782e45ac1248565ad948394e4331047b96c809f59fddd38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8775b4f6fbe7a76ab84860585351c18bb156189890184befe3fdc930d9477e96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8775b4f6fbe7a76ab84860585351c18bb156189890184befe3fdc930d9477e96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8775b4f6fbe7a76ab84860585351c18bb156189890184befe3fdc930d9477e96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8775b4f6fbe7a76ab84860585351c18bb156189890184befe3fdc930d9477e96/merged/var/lib/ceph/mgr/ceph-compute-0.stcefq supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:18 compute-0 podman[80747]: 2026-01-31 05:54:18.050623826 +0000 UTC m=+0.020996545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:18 compute-0 podman[80747]: 2026-01-31 05:54:18.174617506 +0000 UTC m=+0.144990245 container init 47d156703a67411782e45ac1248565ad948394e4331047b96c809f59fddd38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:54:18 compute-0 sudo[80788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhjjqgiewakibcuykhwseaqznwkcejf ; /usr/bin/python3'
Jan 31 05:54:18 compute-0 sudo[80788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:18 compute-0 podman[80747]: 2026-01-31 05:54:18.184259273 +0000 UTC m=+0.154631922 container start 47d156703a67411782e45ac1248565ad948394e4331047b96c809f59fddd38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:54:18 compute-0 bash[80747]: 47d156703a67411782e45ac1248565ad948394e4331047b96c809f59fddd38c3
Jan 31 05:54:18 compute-0 systemd[1]: Started Ceph mgr.compute-0.stcefq for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:54:18 compute-0 ceph-mgr[80792]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:54:18 compute-0 ceph-mgr[80792]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 05:54:18 compute-0 ceph-mgr[80792]: pidfile_write: ignore empty --pid-file
Jan 31 05:54:18 compute-0 sudo[80430]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:18 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'alerts'
Jan 31 05:54:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 05:54:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:18 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 33c42f29-874b-415a-a5c5-fdc4717b3a74 (Updating mgr deployment (+1 -> 2))
Jan 31 05:54:18 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 33c42f29-874b-415a-a5c5-fdc4717b3a74 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 31 05:54:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 05:54:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:18 compute-0 ansible-async_wrapper.py[79514]: Done in kid B.
Jan 31 05:54:18 compute-0 python3[80791]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:18 compute-0 sudo[80813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:54:18 compute-0 sudo[80813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:18 compute-0 sudo[80813]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:18 compute-0 sudo[80844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:18 compute-0 sudo[80844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:18 compute-0 podman[80836]: 2026-01-31 05:54:18.38478147 +0000 UTC m=+0.046766188 container create c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431 (image=quay.io/ceph/ceph:v20, name=agitated_wiles, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:54:18 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:18 compute-0 sudo[80844]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:18 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'balancer'
Jan 31 05:54:18 compute-0 systemd[1]: Started libpod-conmon-c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431.scope.
Jan 31 05:54:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:18 compute-0 sudo[80876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:54:18 compute-0 sudo[80876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b11a6a04026d18055f52f2b88cc88d0c95a41bac11f6390b22954b98ff00b9a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b11a6a04026d18055f52f2b88cc88d0c95a41bac11f6390b22954b98ff00b9a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b11a6a04026d18055f52f2b88cc88d0c95a41bac11f6390b22954b98ff00b9a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:18 compute-0 podman[80836]: 2026-01-31 05:54:18.361840197 +0000 UTC m=+0.023824955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:18 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'cephadm'
Jan 31 05:54:18 compute-0 podman[80836]: 2026-01-31 05:54:18.480219099 +0000 UTC m=+0.142203837 container init c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431 (image=quay.io/ceph/ceph:v20, name=agitated_wiles, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:18 compute-0 podman[80836]: 2026-01-31 05:54:18.485415881 +0000 UTC m=+0.147400619 container start c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431 (image=quay.io/ceph/ceph:v20, name=agitated_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:54:18 compute-0 podman[80836]: 2026-01-31 05:54:18.517065449 +0000 UTC m=+0.179050187 container attach c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431 (image=quay.io/ceph/ceph:v20, name=agitated_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:54:18 compute-0 podman[80968]: 2026-01-31 05:54:18.798903322 +0000 UTC m=+0.050558680 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:54:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 31 05:54:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2902216644' entity='client.admin' 
Jan 31 05:54:18 compute-0 systemd[1]: libpod-c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431.scope: Deactivated successfully.
Jan 31 05:54:18 compute-0 podman[80836]: 2026-01-31 05:54:18.901025076 +0000 UTC m=+0.563009794 container died c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431 (image=quay.io/ceph/ceph:v20, name=agitated_wiles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:54:18 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1085691184' entity='client.admin' 
Jan 31 05:54:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:18 compute-0 ceph-mon[75251]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:18 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2902216644' entity='client.admin' 
Jan 31 05:54:18 compute-0 podman[80968]: 2026-01-31 05:54:18.914495387 +0000 UTC m=+0.166150725 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:54:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b11a6a04026d18055f52f2b88cc88d0c95a41bac11f6390b22954b98ff00b9a-merged.mount: Deactivated successfully.
Jan 31 05:54:18 compute-0 podman[80836]: 2026-01-31 05:54:18.947521923 +0000 UTC m=+0.609506641 container remove c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431 (image=quay.io/ceph/ceph:v20, name=agitated_wiles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:18 compute-0 systemd[1]: libpod-conmon-c9db75f77880054e7fe9ab67a640525e41f2fe4e756966d7f112b21fde6ef431.scope: Deactivated successfully.
Jan 31 05:54:18 compute-0 sudo[80788]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:19 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'crash'
Jan 31 05:54:19 compute-0 sudo[81112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irzzwtjcrsqkluyeliczcqotxczotyfm ; /usr/bin/python3'
Jan 31 05:54:19 compute-0 sudo[81112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:19 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'dashboard'
Jan 31 05:54:19 compute-0 sudo[80876]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:19 compute-0 python3[81118]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:19 compute-0 sudo[81132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:54:19 compute-0 sudo[81132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:19 compute-0 sudo[81132]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 05:54:19 compute-0 podman[81155]: 2026-01-31 05:54:19.354667051 +0000 UTC m=+0.034120515 container create 6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 05:54:19 compute-0 systemd[1]: Started libpod-conmon-6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb.scope.
Jan 31 05:54:19 compute-0 sudo[81166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:19 compute-0 sudo[81166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:19 compute-0 sudo[81166]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf5f3cbf033474d00846e6d546e124b61e0cd2b165e5793d3765ee5a44bca80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf5f3cbf033474d00846e6d546e124b61e0cd2b165e5793d3765ee5a44bca80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf5f3cbf033474d00846e6d546e124b61e0cd2b165e5793d3765ee5a44bca80/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:19 compute-0 podman[81155]: 2026-01-31 05:54:19.342220865 +0000 UTC m=+0.021674349 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:19 compute-0 podman[81155]: 2026-01-31 05:54:19.440902678 +0000 UTC m=+0.120356192 container init 6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 05:54:19 compute-0 podman[81155]: 2026-01-31 05:54:19.446749903 +0000 UTC m=+0.126203377 container start 6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:54:19 compute-0 sudo[81201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:19 compute-0 podman[81155]: 2026-01-31 05:54:19.449950465 +0000 UTC m=+0.129403949 container attach 6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:19 compute-0 sudo[81201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:19 compute-0 podman[81260]: 2026-01-31 05:54:19.759684083 +0000 UTC m=+0.041030187 container create f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b (image=quay.io/ceph/ceph:v20, name=vigorous_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:54:19 compute-0 systemd[1]: Started libpod-conmon-f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b.scope.
Jan 31 05:54:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:19 compute-0 podman[81260]: 2026-01-31 05:54:19.815697893 +0000 UTC m=+0.097044017 container init f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b (image=quay.io/ceph/ceph:v20, name=vigorous_neumann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:19 compute-0 podman[81260]: 2026-01-31 05:54:19.821909681 +0000 UTC m=+0.103255775 container start f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b (image=quay.io/ceph/ceph:v20, name=vigorous_neumann, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:19 compute-0 podman[81260]: 2026-01-31 05:54:19.824784671 +0000 UTC m=+0.106130775 container attach f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b (image=quay.io/ceph/ceph:v20, name=vigorous_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:19 compute-0 vigorous_neumann[81276]: 167 167
Jan 31 05:54:19 compute-0 systemd[1]: libpod-f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b.scope: Deactivated successfully.
Jan 31 05:54:19 compute-0 conmon[81276]: conmon f075e61a5ed6365fdd05 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b.scope/container/memory.events
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/422171114' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 05:54:19 compute-0 podman[81260]: 2026-01-31 05:54:19.827916101 +0000 UTC m=+0.109262235 container died f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b (image=quay.io/ceph/ceph:v20, name=vigorous_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:19 compute-0 podman[81260]: 2026-01-31 05:54:19.740711159 +0000 UTC m=+0.022057303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f215af877f683e6583c4f7f58d9350b8846c5bcae77a5ad4b42461d5f691ecb7-merged.mount: Deactivated successfully.
Jan 31 05:54:19 compute-0 podman[81260]: 2026-01-31 05:54:19.868088247 +0000 UTC m=+0.149434341 container remove f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b (image=quay.io/ceph/ceph:v20, name=vigorous_neumann, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:19 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'devicehealth'
Jan 31 05:54:19 compute-0 systemd[1]: libpod-conmon-f075e61a5ed6365fdd0545c74415aa7f663388dab3027fa779bc72c957791c5b.scope: Deactivated successfully.
Jan 31 05:54:19 compute-0 sudo[81201]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vavqfa (unknown last config time)...
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vavqfa (unknown last config time)...
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vavqfa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.vavqfa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mgr services"} : dispatch
Jan 31 05:54:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vavqfa on compute-0
Jan 31 05:54:19 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vavqfa on compute-0
Jan 31 05:54:19 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 05:54:19 compute-0 sudo[81295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:19 compute-0 sudo[81295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:19 compute-0 sudo[81295]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:20 compute-0 sudo[81320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:20 compute-0 sudo[81320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:20 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq[80766]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 05:54:20 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq[80766]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 05:54:20 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq[80766]:   from numpy import show_config as show_numpy_config
Jan 31 05:54:20 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'influx'
Jan 31 05:54:20 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'insights'
Jan 31 05:54:20 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'iostat'
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:20 compute-0 ceph-mon[75251]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:20 compute-0 ceph-mon[75251]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/422171114' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.vavqfa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mgr services"} : dispatch
Jan 31 05:54:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:20 compute-0 podman[81362]: 2026-01-31 05:54:20.277332368 +0000 UTC m=+0.036341273 container create 58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8 (image=quay.io/ceph/ceph:v20, name=naughty_payne, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:54:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 31 05:54:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:20 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/422171114' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 05:54:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 05:54:20 compute-0 recursing_mahavira[81197]: set require_min_compat_client to mimic
Jan 31 05:54:20 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 31 05:54:20 compute-0 systemd[1]: Started libpod-conmon-58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8.scope.
Jan 31 05:54:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:20 compute-0 systemd[1]: libpod-6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb.scope: Deactivated successfully.
Jan 31 05:54:20 compute-0 podman[81155]: 2026-01-31 05:54:20.318221609 +0000 UTC m=+0.997675093 container died 6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:54:20 compute-0 ceph-mgr[75550]: [progress INFO root] Writing back 2 completed events
Jan 31 05:54:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 05:54:20 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:20 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'k8sevents'
Jan 31 05:54:20 compute-0 podman[81362]: 2026-01-31 05:54:20.338668355 +0000 UTC m=+0.097677330 container init 58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8 (image=quay.io/ceph/ceph:v20, name=naughty_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-edf5f3cbf033474d00846e6d546e124b61e0cd2b165e5793d3765ee5a44bca80-merged.mount: Deactivated successfully.
Jan 31 05:54:20 compute-0 podman[81362]: 2026-01-31 05:54:20.344903303 +0000 UTC m=+0.103912248 container start 58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8 (image=quay.io/ceph/ceph:v20, name=naughty_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:20 compute-0 naughty_payne[81380]: 167 167
Jan 31 05:54:20 compute-0 systemd[1]: libpod-58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8.scope: Deactivated successfully.
Jan 31 05:54:20 compute-0 conmon[81380]: conmon 58f7e0a5a076040b0590 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8.scope/container/memory.events
Jan 31 05:54:20 compute-0 podman[81362]: 2026-01-31 05:54:20.357746512 +0000 UTC m=+0.116755457 container attach 58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8 (image=quay.io/ceph/ceph:v20, name=naughty_payne, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:20 compute-0 podman[81362]: 2026-01-31 05:54:20.262448087 +0000 UTC m=+0.021457012 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:20 compute-0 podman[81362]: 2026-01-31 05:54:20.358728057 +0000 UTC m=+0.117737002 container died 58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8 (image=quay.io/ceph/ceph:v20, name=naughty_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:54:20 compute-0 podman[81155]: 2026-01-31 05:54:20.376986476 +0000 UTC m=+1.056439960 container remove 6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb (image=quay.io/ceph/ceph:v20, name=recursing_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-33bee479253d5e8534ba91e7fe4532fd53d30ded9f6b59235fa8a461760657b2-merged.mount: Deactivated successfully.
Jan 31 05:54:20 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:20 compute-0 sudo[81112]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:20 compute-0 podman[81362]: 2026-01-31 05:54:20.405387149 +0000 UTC m=+0.164396054 container remove 58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8 (image=quay.io/ceph/ceph:v20, name=naughty_payne, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:20 compute-0 systemd[1]: libpod-conmon-58f7e0a5a076040b05904167e0a89b65d147f4026ef61322866b459b002d00e8.scope: Deactivated successfully.
Jan 31 05:54:20 compute-0 systemd[1]: libpod-conmon-6a1b6bbbc13c0a17b64f7a2fff43ee00205a3cacb076e94ebe306a6e37acdabb.scope: Deactivated successfully.
Jan 31 05:54:20 compute-0 sudo[81320]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:20 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:20 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:20 compute-0 sudo[81408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:20 compute-0 sudo[81408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:20 compute-0 sudo[81408]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:20 compute-0 sudo[81433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:54:20 compute-0 sudo[81433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:20 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'localpool'
Jan 31 05:54:20 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 05:54:20 compute-0 sudo[81492]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-besvtwdouktvbbnxdetedbbyqbwvoqey ; /usr/bin/python3'
Jan 31 05:54:20 compute-0 sudo[81492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:20 compute-0 python3[81496]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:20 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'mirroring'
Jan 31 05:54:21 compute-0 podman[81526]: 2026-01-31 05:54:21.008223995 +0000 UTC m=+0.095169101 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:54:21 compute-0 podman[81533]: 2026-01-31 05:54:20.944282528 +0000 UTC m=+0.017114540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:21 compute-0 podman[81533]: 2026-01-31 05:54:21.055250201 +0000 UTC m=+0.128082193 container create 7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694 (image=quay.io/ceph/ceph:v20, name=hopeful_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:21 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'nfs'
Jan 31 05:54:21 compute-0 systemd[1]: Started libpod-conmon-7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694.scope.
Jan 31 05:54:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f721310aaee52b52d5f8cb2734246b06b2e0cfe4940b61033f978b33774576/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f721310aaee52b52d5f8cb2734246b06b2e0cfe4940b61033f978b33774576/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f721310aaee52b52d5f8cb2734246b06b2e0cfe4940b61033f978b33774576/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:21 compute-0 podman[81558]: 2026-01-31 05:54:21.220725412 +0000 UTC m=+0.131649848 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:54:21 compute-0 podman[81533]: 2026-01-31 05:54:21.228540325 +0000 UTC m=+0.301372337 container init 7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694 (image=quay.io/ceph/ceph:v20, name=hopeful_feynman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:54:21 compute-0 podman[81533]: 2026-01-31 05:54:21.235142416 +0000 UTC m=+0.307974438 container start 7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694 (image=quay.io/ceph/ceph:v20, name=hopeful_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:21 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'orchestrator'
Jan 31 05:54:21 compute-0 podman[81526]: 2026-01-31 05:54:21.307961995 +0000 UTC m=+0.394907051 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:21 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:21 compute-0 ceph-mon[75251]: Reconfiguring mgr.compute-0.vavqfa (unknown last config time)...
Jan 31 05:54:21 compute-0 ceph-mon[75251]: Reconfiguring daemon mgr.compute-0.vavqfa on compute-0
Jan 31 05:54:21 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/422171114' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 05:54:21 compute-0 ceph-mon[75251]: osdmap e3: 0 total, 0 up, 0 in
Jan 31 05:54:21 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:21 compute-0 ceph-mon[75251]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:21 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:21 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:21 compute-0 podman[81533]: 2026-01-31 05:54:21.349985865 +0000 UTC m=+0.422817947 container attach 7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694 (image=quay.io/ceph/ceph:v20, name=hopeful_feynman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:21 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 05:54:21 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'osd_support'
Jan 31 05:54:21 compute-0 sudo[81433]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:21 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:21 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 05:54:21 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:21 compute-0 sudo[81678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:21 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:21 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:21 compute-0 sudo[81678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:54:21 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:21 compute-0 sudo[81678]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:54:21 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:21 compute-0 sudo[81703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 05:54:21 compute-0 sudo[81703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:21 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'progress'
Jan 31 05:54:21 compute-0 sudo[81711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:54:21 compute-0 sudo[81711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:21 compute-0 sudo[81711]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:21 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'prometheus'
Jan 31 05:54:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:21 compute-0 sudo[81703]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 05:54:22 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'rbd_support'
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: [cephadm INFO root] Added host compute-0
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'rgw'
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev fceedf71-d517-4d38-add6-decb9242d1eb (Updating mgr deployment (-1 -> 1))
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.stcefq from compute-0 -- ports [8765]
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.stcefq from compute-0 -- ports [8765]
Jan 31 05:54:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 hopeful_feynman[81572]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 05:54:22 compute-0 hopeful_feynman[81572]: Scheduled mon update...
Jan 31 05:54:22 compute-0 hopeful_feynman[81572]: Scheduled mgr update...
Jan 31 05:54:22 compute-0 hopeful_feynman[81572]: Scheduled osd.default_drive_group update...
Jan 31 05:54:22 compute-0 systemd[1]: libpod-7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694.scope: Deactivated successfully.
Jan 31 05:54:22 compute-0 sudo[81773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:22 compute-0 sudo[81773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:22 compute-0 sudo[81773]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:22 compute-0 podman[81798]: 2026-01-31 05:54:22.376478947 +0000 UTC m=+0.018679235 container died 7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694 (image=quay.io/ceph/ceph:v20, name=hopeful_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:22 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:22 compute-0 sudo[81805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --name mgr.compute-0.stcefq --force --tcp-ports 8765
Jan 31 05:54:22 compute-0 sudo[81805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:22 compute-0 ceph-mgr[80792]: mgr[py] Loading python module 'rook'
Jan 31 05:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f721310aaee52b52d5f8cb2734246b06b2e0cfe4940b61033f978b33774576-merged.mount: Deactivated successfully.
Jan 31 05:54:22 compute-0 podman[81798]: 2026-01-31 05:54:22.62575032 +0000 UTC m=+0.267950568 container remove 7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694 (image=quay.io/ceph/ceph:v20, name=hopeful_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:54:22 compute-0 systemd[1]: libpod-conmon-7ecda6f4b6371cdde2a342e720ca91df337fe09a9ebf6f955f1e045e4d215694.scope: Deactivated successfully.
Jan 31 05:54:22 compute-0 sudo[81492]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: Added host compute-0
Jan 31 05:54:22 compute-0 ceph-mon[75251]: Saving service mon spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: Saving service mgr spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 05:54:22 compute-0 ceph-mon[75251]: Saving service osd.default_drive_group spec with placement compute-0
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: Removing daemon mgr.compute-0.stcefq from compute-0 -- ports [8765]
Jan 31 05:54:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:22 compute-0 ceph-mon[75251]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:22 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.stcefq for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:54:22 compute-0 podman[81880]: 2026-01-31 05:54:22.827369596 +0000 UTC m=+0.051024087 container died 47d156703a67411782e45ac1248565ad948394e4331047b96c809f59fddd38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:22 compute-0 sudo[81918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjccnxfwzurazgvyxyskkmlsvpxltsze ; /usr/bin/python3'
Jan 31 05:54:22 compute-0 sudo[81918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8775b4f6fbe7a76ab84860585351c18bb156189890184befe3fdc930d9477e96-merged.mount: Deactivated successfully.
Jan 31 05:54:22 compute-0 podman[81880]: 2026-01-31 05:54:22.935655925 +0000 UTC m=+0.159310416 container remove 47d156703a67411782e45ac1248565ad948394e4331047b96c809f59fddd38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:22 compute-0 bash[81880]: ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-stcefq
Jan 31 05:54:22 compute-0 systemd[1]: ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@mgr.compute-0.stcefq.service: Main process exited, code=exited, status=143/n/a
Jan 31 05:54:22 compute-0 python3[81930]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:23 compute-0 podman[81942]: 2026-01-31 05:54:23.030463943 +0000 UTC m=+0.042674414 container create b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a (image=quay.io/ceph/ceph:v20, name=reverent_meninsky, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:23 compute-0 systemd[1]: Started libpod-conmon-b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a.scope.
Jan 31 05:54:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191b38b53529fc2a18f5e045199a346b28709021820bd1bcd7f2ab151c49a14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191b38b53529fc2a18f5e045199a346b28709021820bd1bcd7f2ab151c49a14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191b38b53529fc2a18f5e045199a346b28709021820bd1bcd7f2ab151c49a14/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:23 compute-0 podman[81942]: 2026-01-31 05:54:23.008177723 +0000 UTC m=+0.020388214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:23 compute-0 podman[81942]: 2026-01-31 05:54:23.119163777 +0000 UTC m=+0.131374298 container init b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a (image=quay.io/ceph/ceph:v20, name=reverent_meninsky, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 05:54:23 compute-0 podman[81942]: 2026-01-31 05:54:23.125280171 +0000 UTC m=+0.137490642 container start b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a (image=quay.io/ceph/ceph:v20, name=reverent_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:23 compute-0 podman[81942]: 2026-01-31 05:54:23.131659385 +0000 UTC m=+0.143869896 container attach b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a (image=quay.io/ceph/ceph:v20, name=reverent_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:23 compute-0 systemd[1]: ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@mgr.compute-0.stcefq.service: Failed with result 'exit-code'.
Jan 31 05:54:23 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.stcefq for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:54:23 compute-0 systemd[1]: ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@mgr.compute-0.stcefq.service: Consumed 5.349s CPU time, 350.6M memory peak, read 0B from disk, written 158.0K to disk.
Jan 31 05:54:23 compute-0 systemd[1]: Reloading.
Jan 31 05:54:23 compute-0 systemd-rc-local-generator[82018]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:23 compute-0 systemd-sysv-generator[82026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:23 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:23 compute-0 sudo[81805]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:23 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.stcefq
Jan 31 05:54:23 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.stcefq
Jan 31 05:54:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.stcefq"} v 0)
Jan 31 05:54:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.stcefq"} : dispatch
Jan 31 05:54:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.stcefq"}]': finished
Jan 31 05:54:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 05:54:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:23 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev fceedf71-d517-4d38-add6-decb9242d1eb (Updating mgr deployment (-1 -> 1))
Jan 31 05:54:23 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event fceedf71-d517-4d38-add6-decb9242d1eb (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 31 05:54:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 05:54:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:23 compute-0 sudo[82045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:54:23 compute-0 sudo[82045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:23 compute-0 sudo[82045]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:23 compute-0 sudo[82070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:23 compute-0 sudo[82070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:23 compute-0 sudo[82070]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:23 compute-0 sudo[82095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:54:23 compute-0 sudo[82095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 05:54:23 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402390780' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 05:54:23 compute-0 reverent_meninsky[81976]: 
Jan 31 05:54:23 compute-0 reverent_meninsky[81976]: {"fsid":"797ee2fc-ca49-5eee-87c0-542bb035a7d7","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":56,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-31T05:53:24:897973+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T05:53:24.900966+0000","services":{}},"progress_events":{"fceedf71-d517-4d38-add6-decb9242d1eb":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 31 05:54:23 compute-0 systemd[1]: libpod-b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a.scope: Deactivated successfully.
Jan 31 05:54:23 compute-0 podman[81942]: 2026-01-31 05:54:23.681558487 +0000 UTC m=+0.693768978 container died b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a (image=quay.io/ceph/ceph:v20, name=reverent_meninsky, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8191b38b53529fc2a18f5e045199a346b28709021820bd1bcd7f2ab151c49a14-merged.mount: Deactivated successfully.
Jan 31 05:54:23 compute-0 podman[81942]: 2026-01-31 05:54:23.839237815 +0000 UTC m=+0.851448276 container remove b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a (image=quay.io/ceph/ceph:v20, name=reverent_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:23 compute-0 systemd[1]: libpod-conmon-b44b04a87952f0fde79f2ddf71868bd16c682449e81cc13f97b56abeac270b3a.scope: Deactivated successfully.
Jan 31 05:54:23 compute-0 sudo[81918]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:24 compute-0 podman[82179]: 2026-01-31 05:54:24.025316287 +0000 UTC m=+0.073743752 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:54:24 compute-0 podman[82179]: 2026-01-31 05:54:24.105374369 +0000 UTC m=+0.153801804 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: Removing key for mgr.compute-0.stcefq
Jan 31 05:54:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.stcefq"} : dispatch
Jan 31 05:54:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.stcefq"}]': finished
Jan 31 05:54:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:24 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1402390780' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 05:54:24 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:24 compute-0 sudo[82095]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:54:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:24 compute-0 sudo[82274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:24 compute-0 sudo[82274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:24 compute-0 sudo[82274]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:24 compute-0 sudo[82299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:54:24 compute-0 sudo[82299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:25 compute-0 podman[82336]: 2026-01-31 05:54:25.008232884 +0000 UTC m=+0.064023301 container create 89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Jan 31 05:54:25 compute-0 systemd[1]: Started libpod-conmon-89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd.scope.
Jan 31 05:54:25 compute-0 podman[82336]: 2026-01-31 05:54:24.965651414 +0000 UTC m=+0.021441821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:25 compute-0 podman[82336]: 2026-01-31 05:54:25.085763827 +0000 UTC m=+0.141554314 container init 89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lumiere, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:25 compute-0 podman[82336]: 2026-01-31 05:54:25.090841765 +0000 UTC m=+0.146632152 container start 89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:25 compute-0 podman[82336]: 2026-01-31 05:54:25.094001546 +0000 UTC m=+0.149791973 container attach 89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:25 compute-0 recursing_lumiere[82352]: 167 167
Jan 31 05:54:25 compute-0 systemd[1]: libpod-89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd.scope: Deactivated successfully.
Jan 31 05:54:25 compute-0 podman[82336]: 2026-01-31 05:54:25.097756407 +0000 UTC m=+0.153546824 container died 89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lumiere, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:54:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-50456afd39cef0a83ab470c410ee5d069091f6ee5b7689bcc8c9553260138cfd-merged.mount: Deactivated successfully.
Jan 31 05:54:25 compute-0 podman[82336]: 2026-01-31 05:54:25.135224478 +0000 UTC m=+0.191014885 container remove 89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_lumiere, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:25 compute-0 systemd[1]: libpod-conmon-89d81f092241b51291d00c6d476b6141890c3144954e737f6817875294571fdd.scope: Deactivated successfully.
Jan 31 05:54:25 compute-0 podman[82376]: 2026-01-31 05:54:25.303178926 +0000 UTC m=+0.052105375 container create c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:25 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:25 compute-0 ceph-mgr[75550]: [progress INFO root] Writing back 3 completed events
Jan 31 05:54:25 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 05:54:25 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:25 compute-0 systemd[1]: Started libpod-conmon-c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32.scope.
Jan 31 05:54:25 compute-0 podman[82376]: 2026-01-31 05:54:25.282819753 +0000 UTC m=+0.031746292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb1cb94415c150b0496b097bf212cff14145247e081992fcb9503ce136771c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb1cb94415c150b0496b097bf212cff14145247e081992fcb9503ce136771c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb1cb94415c150b0496b097bf212cff14145247e081992fcb9503ce136771c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb1cb94415c150b0496b097bf212cff14145247e081992fcb9503ce136771c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb1cb94415c150b0496b097bf212cff14145247e081992fcb9503ce136771c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:25 compute-0 podman[82376]: 2026-01-31 05:54:25.419927511 +0000 UTC m=+0.168854000 container init c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:54:25 compute-0 podman[82376]: 2026-01-31 05:54:25.436580724 +0000 UTC m=+0.185507203 container start c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wu, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:25 compute-0 podman[82376]: 2026-01-31 05:54:25.440836483 +0000 UTC m=+0.189762972 container attach c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:54:25 compute-0 ceph-mon[75251]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:26 compute-0 clever_wu[82393]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:54:26 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:26 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:26 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fccfda9b-7473-4dd5-8d91-930b3f0aef0b
Jan 31 05:54:26 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b"} v 0)
Jan 31 05:54:26 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/258506690' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b"} : dispatch
Jan 31 05:54:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 31 05:54:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:26 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/258506690' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b"}]': finished
Jan 31 05:54:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 05:54:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 31 05:54:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:26 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:26 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:26 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/258506690' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b"} : dispatch
Jan 31 05:54:26 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/258506690' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b"}]': finished
Jan 31 05:54:26 compute-0 ceph-mon[75251]: osdmap e4: 1 total, 0 up, 1 in
Jan 31 05:54:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:26 compute-0 clever_wu[82393]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 31 05:54:26 compute-0 lvm[82485]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:54:26 compute-0 lvm[82485]: VG ceph_vg0 finished
Jan 31 05:54:26 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 05:54:26 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 05:54:26 compute-0 clever_wu[82393]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:26 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 31 05:54:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 05:54:27 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937757716' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 05:54:27 compute-0 clever_wu[82393]:  stderr: got monmap epoch 1
Jan 31 05:54:27 compute-0 clever_wu[82393]: --> Creating keyring file for osd.0
Jan 31 05:54:27 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 31 05:54:27 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 31 05:54:27 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid fccfda9b-7473-4dd5-8d91-930b3f0aef0b --setuser ceph --setgroup ceph
Jan 31 05:54:27 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:27 compute-0 ceph-mon[75251]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:27 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/937757716' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 05:54:28 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:28 compute-0 clever_wu[82393]:  stderr: 2026-01-31T05:54:27.239+0000 7f9f081e08c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 31 05:54:28 compute-0 clever_wu[82393]:  stderr: 2026-01-31T05:54:27.264+0000 7f9f081e08c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 31 05:54:28 compute-0 clever_wu[82393]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 05:54:28 compute-0 clever_wu[82393]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 05:54:28 compute-0 clever_wu[82393]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:28 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 73952b02-2c42-4313-9a4d-6daccff98410
Jan 31 05:54:28 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 05:54:28 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 05:54:28 compute-0 ceph-mon[75251]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:28 compute-0 ceph-mon[75251]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 05:54:28 compute-0 ceph-mon[75251]: Cluster is now healthy
Jan 31 05:54:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "73952b02-2c42-4313-9a4d-6daccff98410"} v 0)
Jan 31 05:54:28 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/904490458' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "73952b02-2c42-4313-9a4d-6daccff98410"} : dispatch
Jan 31 05:54:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 31 05:54:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:28 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/904490458' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "73952b02-2c42-4313-9a4d-6daccff98410"}]': finished
Jan 31 05:54:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 05:54:28 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 31 05:54:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:28 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:28 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:28 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:28 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:29 compute-0 lvm[83420]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:54:29 compute-0 lvm[83420]: VG ceph_vg1 finished
Jan 31 05:54:29 compute-0 clever_wu[82393]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 31 05:54:29 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 31 05:54:29 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 05:54:29 compute-0 clever_wu[82393]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:29 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 31 05:54:29 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 05:54:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973046431' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 05:54:29 compute-0 clever_wu[82393]:  stderr: got monmap epoch 1
Jan 31 05:54:29 compute-0 clever_wu[82393]: --> Creating keyring file for osd.1
Jan 31 05:54:29 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 31 05:54:29 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 31 05:54:29 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 73952b02-2c42-4313-9a4d-6daccff98410 --setuser ceph --setgroup ceph
Jan 31 05:54:29 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/904490458' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "73952b02-2c42-4313-9a4d-6daccff98410"} : dispatch
Jan 31 05:54:29 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/904490458' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "73952b02-2c42-4313-9a4d-6daccff98410"}]': finished
Jan 31 05:54:29 compute-0 ceph-mon[75251]: osdmap e5: 2 total, 0 up, 2 in
Jan 31 05:54:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:29 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1973046431' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 05:54:30 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:30 compute-0 clever_wu[82393]:  stderr: 2026-01-31T05:54:29.601+0000 7f163fce28c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 31 05:54:30 compute-0 clever_wu[82393]:  stderr: 2026-01-31T05:54:29.632+0000 7f163fce28c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 31 05:54:30 compute-0 clever_wu[82393]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 05:54:30 compute-0 clever_wu[82393]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 05:54:30 compute-0 clever_wu[82393]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:30 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b9c925ba-bcb8-4095-8064-de1a4f48f42c
Jan 31 05:54:30 compute-0 ceph-mon[75251]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c"} v 0)
Jan 31 05:54:31 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622899060' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c"} : dispatch
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:31 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1622899060' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c"}]': finished
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 31 05:54:31 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:31 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:31 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:31 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:31 compute-0 lvm[84358]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:54:31 compute-0 lvm[84358]: VG ceph_vg2 finished
Jan 31 05:54:31 compute-0 clever_wu[82393]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 31 05:54:31 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 31 05:54:31 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 05:54:31 compute-0 clever_wu[82393]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:31 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 31 05:54:31 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 05:54:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400843980' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 05:54:31 compute-0 clever_wu[82393]:  stderr: got monmap epoch 1
Jan 31 05:54:31 compute-0 clever_wu[82393]: --> Creating keyring file for osd.2
Jan 31 05:54:31 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1622899060' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c"} : dispatch
Jan 31 05:54:31 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1622899060' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c"}]': finished
Jan 31 05:54:31 compute-0 ceph-mon[75251]: osdmap e6: 3 total, 0 up, 3 in
Jan 31 05:54:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:31 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2400843980' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 05:54:31 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 31 05:54:31 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 31 05:54:31 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid b9c925ba-bcb8-4095-8064-de1a4f48f42c --setuser ceph --setgroup ceph
Jan 31 05:54:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:32 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:32 compute-0 ceph-mon[75251]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:33 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:33 compute-0 clever_wu[82393]:  stderr: 2026-01-31T05:54:31.879+0000 7f70b7b5d8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 31 05:54:33 compute-0 clever_wu[82393]:  stderr: 2026-01-31T05:54:31.901+0000 7f70b7b5d8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 31 05:54:33 compute-0 clever_wu[82393]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 31 05:54:33 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 05:54:33 compute-0 clever_wu[82393]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 05:54:33 compute-0 clever_wu[82393]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:33 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:33 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 05:54:33 compute-0 clever_wu[82393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 05:54:33 compute-0 clever_wu[82393]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 05:54:33 compute-0 clever_wu[82393]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 31 05:54:33 compute-0 systemd[1]: libpod-c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32.scope: Deactivated successfully.
Jan 31 05:54:33 compute-0 systemd[1]: libpod-c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32.scope: Consumed 5.076s CPU time.
Jan 31 05:54:33 compute-0 conmon[82393]: conmon c943133525df3ad66e43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32.scope/container/memory.events
Jan 31 05:54:33 compute-0 podman[85265]: 2026-01-31 05:54:33.604348232 +0000 UTC m=+0.024371643 container died c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb1cb94415c150b0496b097bf212cff14145247e081992fcb9503ce136771c4-merged.mount: Deactivated successfully.
Jan 31 05:54:33 compute-0 podman[85265]: 2026-01-31 05:54:33.892231297 +0000 UTC m=+0.312254728 container remove c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:54:33 compute-0 systemd[1]: libpod-conmon-c943133525df3ad66e43cd84e28a081fcc10145d2a27d95a53594eacbe5c7d32.scope: Deactivated successfully.
Jan 31 05:54:33 compute-0 sudo[82299]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:34 compute-0 sudo[85279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:34 compute-0 sudo[85279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:34 compute-0 sudo[85279]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:34 compute-0 sudo[85304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:54:34 compute-0 sudo[85304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:34 compute-0 podman[85341]: 2026-01-31 05:54:34.33525502 +0000 UTC m=+0.056902233 container create 3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_germain, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:54:34 compute-0 systemd[1]: Started libpod-conmon-3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f.scope.
Jan 31 05:54:34 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:34 compute-0 podman[85341]: 2026-01-31 05:54:34.302564316 +0000 UTC m=+0.024211559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:34 compute-0 podman[85341]: 2026-01-31 05:54:34.418866886 +0000 UTC m=+0.140514129 container init 3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:34 compute-0 podman[85341]: 2026-01-31 05:54:34.426841665 +0000 UTC m=+0.148488878 container start 3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:34 compute-0 systemd[1]: libpod-3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f.scope: Deactivated successfully.
Jan 31 05:54:34 compute-0 optimistic_germain[85357]: 167 167
Jan 31 05:54:34 compute-0 podman[85341]: 2026-01-31 05:54:34.432041227 +0000 UTC m=+0.153688460 container attach 3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 05:54:34 compute-0 podman[85341]: 2026-01-31 05:54:34.432744311 +0000 UTC m=+0.154391524 container died 3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8d545bf2c128a0d088527509ba87ba92e7721f1cb9639597af22eaf77c770bd-merged.mount: Deactivated successfully.
Jan 31 05:54:34 compute-0 podman[85341]: 2026-01-31 05:54:34.486617997 +0000 UTC m=+0.208265220 container remove 3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:34 compute-0 systemd[1]: libpod-conmon-3cea7635cc56a80800c5d273e2c631caba1ffbfc5f25fc98192303b6e578167f.scope: Deactivated successfully.
Jan 31 05:54:34 compute-0 podman[85380]: 2026-01-31 05:54:34.640822943 +0000 UTC m=+0.050385454 container create e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:54:34 compute-0 systemd[1]: Started libpod-conmon-e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8.scope.
Jan 31 05:54:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef65a9d640803fc15559ccf8c21a8a6a8fb6c2cfc0c96ab1bdca1c73c38bfa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef65a9d640803fc15559ccf8c21a8a6a8fb6c2cfc0c96ab1bdca1c73c38bfa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef65a9d640803fc15559ccf8c21a8a6a8fb6c2cfc0c96ab1bdca1c73c38bfa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef65a9d640803fc15559ccf8c21a8a6a8fb6c2cfc0c96ab1bdca1c73c38bfa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:34 compute-0 podman[85380]: 2026-01-31 05:54:34.615990434 +0000 UTC m=+0.025553005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:34 compute-0 podman[85380]: 2026-01-31 05:54:34.736601075 +0000 UTC m=+0.146163636 container init e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 05:54:34 compute-0 podman[85380]: 2026-01-31 05:54:34.744736879 +0000 UTC m=+0.154299400 container start e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:34 compute-0 podman[85380]: 2026-01-31 05:54:34.748415188 +0000 UTC m=+0.157977709 container attach e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shirley, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]: {
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:     "0": [
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:         {
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "devices": [
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "/dev/loop3"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             ],
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_name": "ceph_lv0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_size": "21470642176",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "name": "ceph_lv0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "tags": {
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cluster_name": "ceph",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.crush_device_class": "",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.encrypted": "0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.objectstore": "bluestore",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osd_id": "0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.type": "block",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.vdo": "0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.with_tpm": "0"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             },
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "type": "block",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "vg_name": "ceph_vg0"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:         }
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:     ],
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:     "1": [
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:         {
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "devices": [
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "/dev/loop4"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             ],
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_name": "ceph_lv1",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_size": "21470642176",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "name": "ceph_lv1",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "tags": {
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cluster_name": "ceph",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.crush_device_class": "",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.encrypted": "0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.objectstore": "bluestore",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osd_id": "1",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.type": "block",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.vdo": "0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.with_tpm": "0"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             },
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "type": "block",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "vg_name": "ceph_vg1"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:         }
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:     ],
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:     "2": [
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:         {
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "devices": [
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "/dev/loop5"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             ],
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_name": "ceph_lv2",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_size": "21470642176",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "name": "ceph_lv2",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "tags": {
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.cluster_name": "ceph",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.crush_device_class": "",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.encrypted": "0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.objectstore": "bluestore",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osd_id": "2",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.type": "block",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.vdo": "0",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:                 "ceph.with_tpm": "0"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             },
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "type": "block",
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:             "vg_name": "ceph_vg2"
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:         }
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]:     ]
Jan 31 05:54:35 compute-0 pedantic_shirley[85396]: }
Jan 31 05:54:35 compute-0 systemd[1]: libpod-e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8.scope: Deactivated successfully.
Jan 31 05:54:35 compute-0 podman[85380]: 2026-01-31 05:54:35.040661625 +0000 UTC m=+0.450224146 container died e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ef65a9d640803fc15559ccf8c21a8a6a8fb6c2cfc0c96ab1bdca1c73c38bfa3-merged.mount: Deactivated successfully.
Jan 31 05:54:35 compute-0 podman[85380]: 2026-01-31 05:54:35.250085824 +0000 UTC m=+0.659648345 container remove e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:35 compute-0 systemd[1]: libpod-conmon-e8980108f98273a94776931e8591be257bf9615c056621b0412bf93b225c6ab8.scope: Deactivated successfully.
Jan 31 05:54:35 compute-0 sudo[85304]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 31 05:54:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 05:54:35 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:35 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:35 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 31 05:54:35 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 31 05:54:35 compute-0 sudo[85419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:35 compute-0 sudo[85419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:35 compute-0 sudo[85419]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:35 compute-0 sudo[85444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:35 compute-0 sudo[85444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:35 compute-0 ceph-mon[75251]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:35 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 05:54:35 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:35 compute-0 podman[85512]: 2026-01-31 05:54:35.831723749 +0000 UTC m=+0.038361224 container create 53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 05:54:35 compute-0 systemd[1]: Started libpod-conmon-53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234.scope.
Jan 31 05:54:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:35 compute-0 podman[85512]: 2026-01-31 05:54:35.813346245 +0000 UTC m=+0.019983730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:35 compute-0 podman[85512]: 2026-01-31 05:54:35.912836247 +0000 UTC m=+0.119473732 container init 53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:35 compute-0 podman[85512]: 2026-01-31 05:54:35.918589108 +0000 UTC m=+0.125226593 container start 53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goodall, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:35 compute-0 reverent_goodall[85528]: 167 167
Jan 31 05:54:35 compute-0 systemd[1]: libpod-53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234.scope: Deactivated successfully.
Jan 31 05:54:35 compute-0 podman[85512]: 2026-01-31 05:54:35.924391501 +0000 UTC m=+0.131029026 container attach 53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:54:35 compute-0 podman[85512]: 2026-01-31 05:54:35.924693352 +0000 UTC m=+0.131330827 container died 53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc704f5b5624c12bdb59ea053c1dc0941ccadae2ab7aa669d33e3ab69513027f-merged.mount: Deactivated successfully.
Jan 31 05:54:35 compute-0 podman[85512]: 2026-01-31 05:54:35.967705297 +0000 UTC m=+0.174342772 container remove 53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goodall, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:35 compute-0 systemd[1]: libpod-conmon-53a8f50b28f0fc4f9c23bfbe8860f323ebda5c7a382a6e5afe73edf2f7246234.scope: Deactivated successfully.
Jan 31 05:54:36 compute-0 podman[85560]: 2026-01-31 05:54:36.210187953 +0000 UTC m=+0.065908688 container create 5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:36 compute-0 podman[85560]: 2026-01-31 05:54:36.176809065 +0000 UTC m=+0.032529820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:36 compute-0 systemd[1]: Started libpod-conmon-5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e.scope.
Jan 31 05:54:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4625ba52bc5ca6e6c1af03e9e55e1615853da4dea74093890c1707c7b3adad8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4625ba52bc5ca6e6c1af03e9e55e1615853da4dea74093890c1707c7b3adad8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4625ba52bc5ca6e6c1af03e9e55e1615853da4dea74093890c1707c7b3adad8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4625ba52bc5ca6e6c1af03e9e55e1615853da4dea74093890c1707c7b3adad8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4625ba52bc5ca6e6c1af03e9e55e1615853da4dea74093890c1707c7b3adad8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:36 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:36 compute-0 podman[85560]: 2026-01-31 05:54:36.410933498 +0000 UTC m=+0.266654233 container init 5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:54:36 compute-0 podman[85560]: 2026-01-31 05:54:36.420539114 +0000 UTC m=+0.276259849 container start 5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:54:36 compute-0 podman[85560]: 2026-01-31 05:54:36.511236468 +0000 UTC m=+0.366957203 container attach 5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:36 compute-0 ceph-mon[75251]: Deploying daemon osd.0 on compute-0
Jan 31 05:54:36 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test[85577]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 05:54:36 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test[85577]:                             [--no-systemd] [--no-tmpfs]
Jan 31 05:54:36 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test[85577]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 05:54:36 compute-0 systemd[1]: libpod-5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e.scope: Deactivated successfully.
Jan 31 05:54:36 compute-0 podman[85560]: 2026-01-31 05:54:36.622334096 +0000 UTC m=+0.478054801 container died 5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:54:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4625ba52bc5ca6e6c1af03e9e55e1615853da4dea74093890c1707c7b3adad8-merged.mount: Deactivated successfully.
Jan 31 05:54:36 compute-0 podman[85560]: 2026-01-31 05:54:36.668675208 +0000 UTC m=+0.524395923 container remove 5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:36 compute-0 systemd[1]: libpod-conmon-5a9fa54dc9731635e776c6bdf3ef85c8046baaee34216d5a1ce85ea5bf26c24e.scope: Deactivated successfully.
Jan 31 05:54:36 compute-0 systemd[1]: Reloading.
Jan 31 05:54:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:36 compute-0 systemd-sysv-generator[85643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:36 compute-0 systemd-rc-local-generator[85639]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:37 compute-0 systemd[1]: Reloading.
Jan 31 05:54:37 compute-0 systemd-rc-local-generator[85682]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:37 compute-0 systemd-sysv-generator[85685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:37 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:37 compute-0 systemd[1]: Starting Ceph osd.0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:54:37 compute-0 podman[85740]: 2026-01-31 05:54:37.539424798 +0000 UTC m=+0.048618142 container create 4265ce5b17a25840e69283fb22bd0465e5c5640ba64e25bf8ca6d14f4d356c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:54:37 compute-0 ceph-mon[75251]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be23342970187c210608839f420c3bab59d081ebfeeacc1d8017115469555cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be23342970187c210608839f420c3bab59d081ebfeeacc1d8017115469555cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be23342970187c210608839f420c3bab59d081ebfeeacc1d8017115469555cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be23342970187c210608839f420c3bab59d081ebfeeacc1d8017115469555cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be23342970187c210608839f420c3bab59d081ebfeeacc1d8017115469555cb/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:37 compute-0 podman[85740]: 2026-01-31 05:54:37.522158444 +0000 UTC m=+0.031351748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:37 compute-0 podman[85740]: 2026-01-31 05:54:37.623417128 +0000 UTC m=+0.132610512 container init 4265ce5b17a25840e69283fb22bd0465e5c5640ba64e25bf8ca6d14f4d356c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:54:37 compute-0 podman[85740]: 2026-01-31 05:54:37.630365541 +0000 UTC m=+0.139558855 container start 4265ce5b17a25840e69283fb22bd0465e5c5640ba64e25bf8ca6d14f4d356c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:54:37 compute-0 podman[85740]: 2026-01-31 05:54:37.695722988 +0000 UTC m=+0.204916312 container attach 4265ce5b17a25840e69283fb22bd0465e5c5640ba64e25bf8ca6d14f4d356c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:37 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:37 compute-0 bash[85740]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:37 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:37 compute-0 bash[85740]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:38 compute-0 lvm[85838]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:54:38 compute-0 lvm[85838]: VG ceph_vg0 finished
Jan 31 05:54:38 compute-0 lvm[85841]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:54:38 compute-0 lvm[85841]: VG ceph_vg1 finished
Jan 31 05:54:38 compute-0 lvm[85843]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:54:38 compute-0 lvm[85843]: VG ceph_vg2 finished
Jan 31 05:54:38 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:38 compute-0 bash[85740]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 05:54:38 compute-0 bash[85740]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:38 compute-0 bash[85740]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 05:54:38 compute-0 bash[85740]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 05:54:38 compute-0 bash[85740]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:38 compute-0 bash[85740]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:38 compute-0 bash[85740]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 05:54:38 compute-0 bash[85740]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 05:54:38 compute-0 bash[85740]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 05:54:38 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate[85755]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 05:54:38 compute-0 bash[85740]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 05:54:38 compute-0 systemd[1]: libpod-4265ce5b17a25840e69283fb22bd0465e5c5640ba64e25bf8ca6d14f4d356c3e.scope: Deactivated successfully.
Jan 31 05:54:38 compute-0 systemd[1]: libpod-4265ce5b17a25840e69283fb22bd0465e5c5640ba64e25bf8ca6d14f4d356c3e.scope: Consumed 1.261s CPU time.
Jan 31 05:54:38 compute-0 podman[85938]: 2026-01-31 05:54:38.673233956 +0000 UTC m=+0.033186182 container died 4265ce5b17a25840e69283fb22bd0465e5c5640ba64e25bf8ca6d14f4d356c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:38 compute-0 ceph-mon[75251]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-6be23342970187c210608839f420c3bab59d081ebfeeacc1d8017115469555cb-merged.mount: Deactivated successfully.
Jan 31 05:54:38 compute-0 podman[85938]: 2026-01-31 05:54:38.723720463 +0000 UTC m=+0.083672629 container remove 4265ce5b17a25840e69283fb22bd0465e5c5640ba64e25bf8ca6d14f4d356c3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:54:38 compute-0 podman[85997]: 2026-01-31 05:54:38.902392205 +0000 UTC m=+0.039487033 container create fcb48056ec9ae08f48981270b006e7f726e3d26b448aa36ed390f81e914466b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b576673939ad142a7c9760dc4a90600e2e78120ce92b3bba31c735834c96879e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b576673939ad142a7c9760dc4a90600e2e78120ce92b3bba31c735834c96879e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b576673939ad142a7c9760dc4a90600e2e78120ce92b3bba31c735834c96879e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b576673939ad142a7c9760dc4a90600e2e78120ce92b3bba31c735834c96879e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b576673939ad142a7c9760dc4a90600e2e78120ce92b3bba31c735834c96879e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:38 compute-0 podman[85997]: 2026-01-31 05:54:38.880970156 +0000 UTC m=+0.018065064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:39 compute-0 podman[85997]: 2026-01-31 05:54:39.010755927 +0000 UTC m=+0.147850785 container init fcb48056ec9ae08f48981270b006e7f726e3d26b448aa36ed390f81e914466b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:54:39 compute-0 podman[85997]: 2026-01-31 05:54:39.016106695 +0000 UTC m=+0.153201533 container start fcb48056ec9ae08f48981270b006e7f726e3d26b448aa36ed390f81e914466b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: pidfile_write: ignore empty --pid-file
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 bash[85997]: fcb48056ec9ae08f48981270b006e7f726e3d26b448aa36ed390f81e914466b4
Jan 31 05:54:39 compute-0 systemd[1]: Started Ceph osd.0 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202400 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 sudo[85444]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652202000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-osd[86016]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 31 05:54:39 compute-0 ceph-osd[86016]: load: jerasure load: lrc 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 05:54:39 compute-0 ceph-osd[86016]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 31 05:54:39 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 05:54:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:39 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:39 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 31 05:54:39 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652203c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 05:54:39 compute-0 sudo[86059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount shared_bdev_used = 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 05:54:39 compute-0 sudo[86059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: RocksDB version: 7.9.2
Jan 31 05:54:39 compute-0 sudo[86059]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Git sha 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: DB SUMMARY
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: DB Session ID:  J73KQD297IB20VEUJJIJ
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: CURRENT file:  CURRENT
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                         Options.error_if_exists: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.create_if_missing: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                                     Options.env: 0x55b652093ea0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                                Options.info_log: 0x55b6530e68a0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                              Options.statistics: (nil)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.use_fsync: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                              Options.db_log_dir: 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.write_buffer_manager: 0x55b6520f8b40
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.unordered_write: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.row_cache: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                              Options.wal_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.two_write_queues: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.wal_compression: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.atomic_flush: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.max_background_jobs: 4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.max_background_compactions: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.max_subcompactions: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.max_open_files: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Compression algorithms supported:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kZSTD supported: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kXpressCompression supported: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kBZip2Compression supported: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kLZ4Compression supported: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kZlibCompression supported: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kSnappyCompression supported: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b652097a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b652097a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b652097a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2eae50c6-720e-4a85-8b13-0ad8208f8271
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838879377460, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838879379407, "job": 1, "event": "recovery_finished"}
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: freelist init
Jan 31 05:54:39 compute-0 ceph-osd[86016]: freelist _read_cfg
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs umount
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 05:54:39 compute-0 sudo[86155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:39 compute-0 sudo[86155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bdev(0x55b652e99800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluefs mount shared_bdev_used = 27262976
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: RocksDB version: 7.9.2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Git sha 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: DB SUMMARY
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: DB Session ID:  J73KQD297IB20VEUJJII
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: CURRENT file:  CURRENT
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                         Options.error_if_exists: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.create_if_missing: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                                     Options.env: 0x55b652093ce0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                                Options.info_log: 0x55b6530e6960
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                              Options.statistics: (nil)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.use_fsync: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                              Options.db_log_dir: 
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.write_buffer_manager: 0x55b6520f8b40
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.unordered_write: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.row_cache: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                              Options.wal_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.two_write_queues: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.wal_compression: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.atomic_flush: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.max_background_jobs: 4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.max_background_compactions: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.max_subcompactions: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.max_open_files: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Compression algorithms supported:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kZSTD supported: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kXpressCompression supported: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kBZip2Compression supported: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kLZ4Compression supported: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kZlibCompression supported: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         kSnappyCompression supported: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e6bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b6520978d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e70c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b652097a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e70c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b652097a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b6530e70c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b652097a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2eae50c6-720e-4a85-8b13-0ad8208f8271
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838879433145, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838879437624, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838879, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2eae50c6-720e-4a85-8b13-0ad8208f8271", "db_session_id": "J73KQD297IB20VEUJJII", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838879441858, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838879, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2eae50c6-720e-4a85-8b13-0ad8208f8271", "db_session_id": "J73KQD297IB20VEUJJII", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838879444030, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838879, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2eae50c6-720e-4a85-8b13-0ad8208f8271", "db_session_id": "J73KQD297IB20VEUJJII", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838879445159, "job": 1, "event": "recovery_finished"}
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b6532fe000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: DB pointer 0x55b6532a0000
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 31 05:54:39 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:54:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 05:54:39 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 05:54:39 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 05:54:39 compute-0 ceph-osd[86016]: _get_class not permitted to load lua
Jan 31 05:54:39 compute-0 ceph-osd[86016]: _get_class not permitted to load sdk
Jan 31 05:54:39 compute-0 ceph-osd[86016]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 05:54:39 compute-0 ceph-osd[86016]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 05:54:39 compute-0 ceph-osd[86016]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 05:54:39 compute-0 ceph-osd[86016]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 05:54:39 compute-0 ceph-osd[86016]: osd.0 0 load_pgs
Jan 31 05:54:39 compute-0 ceph-osd[86016]: osd.0 0 load_pgs opened 0 pgs
Jan 31 05:54:39 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0[86012]: 2026-01-31T05:54:39.467+0000 7f437e2578c0 -1 osd.0 0 log_to_monitors true
Jan 31 05:54:39 compute-0 ceph-osd[86016]: osd.0 0 log_to_monitors true
Jan 31 05:54:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 31 05:54:39 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 05:54:39 compute-0 podman[86559]: 2026-01-31 05:54:39.776432922 +0000 UTC m=+0.042939143 container create 93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:39 compute-0 systemd[1]: Started libpod-conmon-93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81.scope.
Jan 31 05:54:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:39 compute-0 podman[86559]: 2026-01-31 05:54:39.756425252 +0000 UTC m=+0.022931543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:39 compute-0 podman[86559]: 2026-01-31 05:54:39.855289832 +0000 UTC m=+0.121796123 container init 93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mcclintock, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 05:54:39 compute-0 podman[86559]: 2026-01-31 05:54:39.864342029 +0000 UTC m=+0.130848260 container start 93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mcclintock, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 05:54:39 compute-0 kind_mcclintock[86575]: 167 167
Jan 31 05:54:39 compute-0 systemd[1]: libpod-93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81.scope: Deactivated successfully.
Jan 31 05:54:39 compute-0 podman[86559]: 2026-01-31 05:54:39.904008997 +0000 UTC m=+0.170515238 container attach 93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mcclintock, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:39 compute-0 podman[86559]: 2026-01-31 05:54:39.905217189 +0000 UTC m=+0.171723430 container died 93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:54:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebe198d4df35afdff522ee926c1fbb3f408d546e6f46cae95d9ffeb913d6198a-merged.mount: Deactivated successfully.
Jan 31 05:54:39 compute-0 podman[86559]: 2026-01-31 05:54:39.971900303 +0000 UTC m=+0.238406514 container remove 93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:39 compute-0 systemd[1]: libpod-conmon-93d9e937a1efcffc0a8fdfb32d0ccea64d3dcd6bb116624653176f2f3985af81.scope: Deactivated successfully.
Jan 31 05:54:40 compute-0 podman[86606]: 2026-01-31 05:54:40.165608872 +0000 UTC m=+0.038070424 container create 20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:40 compute-0 systemd[1]: Started libpod-conmon-20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e.scope.
Jan 31 05:54:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68c298d8fe70cbbe74f20d81d253ac9602ce0811e873ef465ab858d8b5e58e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68c298d8fe70cbbe74f20d81d253ac9602ce0811e873ef465ab858d8b5e58e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68c298d8fe70cbbe74f20d81d253ac9602ce0811e873ef465ab858d8b5e58e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68c298d8fe70cbbe74f20d81d253ac9602ce0811e873ef465ab858d8b5e58e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68c298d8fe70cbbe74f20d81d253ac9602ce0811e873ef465ab858d8b5e58e2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:40 compute-0 podman[86606]: 2026-01-31 05:54:40.242465381 +0000 UTC m=+0.114926933 container init 20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:54:40 compute-0 podman[86606]: 2026-01-31 05:54:40.149327212 +0000 UTC m=+0.021788774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:40 compute-0 podman[86606]: 2026-01-31 05:54:40.246518943 +0000 UTC m=+0.118980485 container start 20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:40 compute-0 podman[86606]: 2026-01-31 05:54:40.271908001 +0000 UTC m=+0.144369563 container attach 20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 05:54:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:40 compute-0 ceph-mon[75251]: Deploying daemon osd.1 on compute-0
Jan 31 05:54:40 compute-0 ceph-mon[75251]: from='osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 05:54:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 31 05:54:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:40 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 05:54:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 31 05:54:40 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 31 05:54:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 05:54:40 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 05:54:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 05:54:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:40 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:40 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:40 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:40 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:40 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:40 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:40 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test[86622]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 05:54:40 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test[86622]:                             [--no-systemd] [--no-tmpfs]
Jan 31 05:54:40 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test[86622]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 05:54:40 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:40 compute-0 systemd[1]: libpod-20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e.scope: Deactivated successfully.
Jan 31 05:54:40 compute-0 podman[86606]: 2026-01-31 05:54:40.412203021 +0000 UTC m=+0.284664563 container died 20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a68c298d8fe70cbbe74f20d81d253ac9602ce0811e873ef465ab858d8b5e58e2-merged.mount: Deactivated successfully.
Jan 31 05:54:40 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 05:54:40 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 05:54:40 compute-0 podman[86606]: 2026-01-31 05:54:40.610796891 +0000 UTC m=+0.483258423 container remove 20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate-test, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:54:40 compute-0 systemd[1]: libpod-conmon-20224f1f8c224443ab424e2078b13bac00d16e1ea3a8d547a9e0a5f2d376541e.scope: Deactivated successfully.
Jan 31 05:54:40 compute-0 systemd[1]: Reloading.
Jan 31 05:54:40 compute-0 systemd-rc-local-generator[86680]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:40 compute-0 systemd-sysv-generator[86685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:41 compute-0 systemd[1]: Reloading.
Jan 31 05:54:41 compute-0 systemd-sysv-generator[86724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:41 compute-0 systemd-rc-local-generator[86720]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:41 compute-0 systemd[1]: Starting Ceph osd.1 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:54:41 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 31 05:54:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:41 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 05:54:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 31 05:54:41 compute-0 ceph-osd[86016]: osd.0 0 done with init, starting boot process
Jan 31 05:54:41 compute-0 ceph-osd[86016]: osd.0 0 start_boot
Jan 31 05:54:41 compute-0 ceph-osd[86016]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 05:54:41 compute-0 ceph-osd[86016]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 05:54:41 compute-0 ceph-osd[86016]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 05:54:41 compute-0 ceph-osd[86016]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 05:54:41 compute-0 ceph-osd[86016]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 31 05:54:41 compute-0 ceph-mon[75251]: from='osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 05:54:41 compute-0 ceph-mon[75251]: osdmap e7: 3 total, 0 up, 3 in
Jan 31 05:54:41 compute-0 ceph-mon[75251]: from='osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 05:54:41 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:41 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:41 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:41 compute-0 ceph-mon[75251]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:41 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 31 05:54:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:41 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:41 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:41 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:41 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:41 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:41 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:41 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:41 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:41 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:41 compute-0 podman[86783]: 2026-01-31 05:54:41.452587878 +0000 UTC m=+0.041490403 container create 5c426b469e9e75ef47432b1e416978d7c2b8b820e53b9cb290fd66bfe19ab247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 31 05:54:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ab95af7e049115d0b4ad293d01969b09765550f1000bb92262de2b0698d6d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ab95af7e049115d0b4ad293d01969b09765550f1000bb92262de2b0698d6d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ab95af7e049115d0b4ad293d01969b09765550f1000bb92262de2b0698d6d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ab95af7e049115d0b4ad293d01969b09765550f1000bb92262de2b0698d6d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01ab95af7e049115d0b4ad293d01969b09765550f1000bb92262de2b0698d6d2/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:41 compute-0 podman[86783]: 2026-01-31 05:54:41.434772555 +0000 UTC m=+0.023675100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:41 compute-0 podman[86783]: 2026-01-31 05:54:41.548166803 +0000 UTC m=+0.137069358 container init 5c426b469e9e75ef47432b1e416978d7c2b8b820e53b9cb290fd66bfe19ab247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:41 compute-0 podman[86783]: 2026-01-31 05:54:41.551941135 +0000 UTC m=+0.140843670 container start 5c426b469e9e75ef47432b1e416978d7c2b8b820e53b9cb290fd66bfe19ab247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:54:41 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:41 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:41 compute-0 bash[86783]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:41 compute-0 bash[86783]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:41 compute-0 podman[86783]: 2026-01-31 05:54:41.783594982 +0000 UTC m=+0.372497547 container attach 5c426b469e9e75ef47432b1e416978d7c2b8b820e53b9cb290fd66bfe19ab247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:54:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:42 compute-0 lvm[86881]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:54:42 compute-0 lvm[86881]: VG ceph_vg0 finished
Jan 31 05:54:42 compute-0 lvm[86884]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:54:42 compute-0 lvm[86884]: VG ceph_vg1 finished
Jan 31 05:54:42 compute-0 lvm[86886]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:54:42 compute-0 lvm[86886]: VG ceph_vg2 finished
Jan 31 05:54:42 compute-0 lvm[86887]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:54:42 compute-0 lvm[86887]: VG ceph_vg0 finished
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 05:54:42 compute-0 bash[86783]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 05:54:42 compute-0 bash[86783]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:42 compute-0 bash[86783]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:42 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:42 compute-0 ceph-mon[75251]: from='osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 05:54:42 compute-0 ceph-mon[75251]: osdmap e8: 3 total, 0 up, 3 in
Jan 31 05:54:42 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:42 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:42 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:42 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:42 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 05:54:42 compute-0 bash[86783]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 05:54:42 compute-0 bash[86783]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 05:54:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:42 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:42 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:42 compute-0 bash[86783]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:42 compute-0 bash[86783]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 05:54:42 compute-0 bash[86783]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 05:54:42 compute-0 bash[86783]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 05:54:42 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate[86798]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 05:54:42 compute-0 bash[86783]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 05:54:42 compute-0 systemd[1]: libpod-5c426b469e9e75ef47432b1e416978d7c2b8b820e53b9cb290fd66bfe19ab247.scope: Deactivated successfully.
Jan 31 05:54:42 compute-0 systemd[1]: libpod-5c426b469e9e75ef47432b1e416978d7c2b8b820e53b9cb290fd66bfe19ab247.scope: Consumed 1.169s CPU time.
Jan 31 05:54:42 compute-0 podman[86993]: 2026-01-31 05:54:42.554870313 +0000 UTC m=+0.020080044 container died 5c426b469e9e75ef47432b1e416978d7c2b8b820e53b9cb290fd66bfe19ab247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:54:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-01ab95af7e049115d0b4ad293d01969b09765550f1000bb92262de2b0698d6d2-merged.mount: Deactivated successfully.
Jan 31 05:54:42 compute-0 podman[86993]: 2026-01-31 05:54:42.759763222 +0000 UTC m=+0.224972943 container remove 5c426b469e9e75ef47432b1e416978d7c2b8b820e53b9cb290fd66bfe19ab247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:54:42 compute-0 podman[87051]: 2026-01-31 05:54:42.962358262 +0000 UTC m=+0.080829529 container create 992c3eed0bdda9df89ade1177ea51271da3e6bc188b8a5927747a30c0bd96cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:42 compute-0 podman[87051]: 2026-01-31 05:54:42.902186067 +0000 UTC m=+0.020657354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ec36cc386910e897b2f5bd7acd1f6794e72810a17cb7121563f393f319bbfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ec36cc386910e897b2f5bd7acd1f6794e72810a17cb7121563f393f319bbfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ec36cc386910e897b2f5bd7acd1f6794e72810a17cb7121563f393f319bbfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ec36cc386910e897b2f5bd7acd1f6794e72810a17cb7121563f393f319bbfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ec36cc386910e897b2f5bd7acd1f6794e72810a17cb7121563f393f319bbfc/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:43 compute-0 podman[87051]: 2026-01-31 05:54:43.283323325 +0000 UTC m=+0.401794662 container init 992c3eed0bdda9df89ade1177ea51271da3e6bc188b8a5927747a30c0bd96cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:43 compute-0 podman[87051]: 2026-01-31 05:54:43.291575813 +0000 UTC m=+0.410047110 container start 992c3eed0bdda9df89ade1177ea51271da3e6bc188b8a5927747a30c0bd96cc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: pidfile_write: ignore empty --pid-file
Jan 31 05:54:43 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 bash[87051]: 992c3eed0bdda9df89ade1177ea51271da3e6bc188b8a5927747a30c0bd96cc2
Jan 31 05:54:43 compute-0 systemd[1]: Started Ceph osd.1 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:43 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:43 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56400 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e56000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 sudo[86155]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:43 compute-0 ceph-mon[75251]: purged_snaps scrub starts
Jan 31 05:54:43 compute-0 ceph-mon[75251]: purged_snaps scrub ok
Jan 31 05:54:43 compute-0 ceph-mon[75251]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:43 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:43 compute-0 ceph-osd[87070]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 31 05:54:43 compute-0 ceph-osd[87070]: load: jerasure load: lrc 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 05:54:43 compute-0 ceph-osd[87070]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562641e57c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount shared_bdev_used = 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: RocksDB version: 7.9.2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Git sha 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: DB SUMMARY
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: DB Session ID:  ERFXH2SUEJDPDGER5U8D
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: CURRENT file:  CURRENT
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                         Options.error_if_exists: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.create_if_missing: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                                     Options.env: 0x562641ce7ea0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                                Options.info_log: 0x562642d4a8a0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                              Options.statistics: (nil)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.use_fsync: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                              Options.db_log_dir: 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.write_buffer_manager: 0x562641d48b40
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.unordered_write: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.row_cache: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                              Options.wal_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.two_write_queues: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.wal_compression: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.atomic_flush: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.max_background_jobs: 4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.max_background_compactions: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.max_subcompactions: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.max_open_files: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Compression algorithms supported:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kZSTD supported: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kXpressCompression supported: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kBZip2Compression supported: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kLZ4Compression supported: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kZlibCompression supported: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kSnappyCompression supported: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d95cf394-3c06-47d7-9082-28b91c433fca
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838883693653, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838883694899, "job": 1, "event": "recovery_finished"}
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: freelist init
Jan 31 05:54:43 compute-0 ceph-osd[87070]: freelist _read_cfg
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs umount
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bdev(0x562642aed800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluefs mount shared_bdev_used = 27262976
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: RocksDB version: 7.9.2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Git sha 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: DB SUMMARY
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: DB Session ID:  ERFXH2SUEJDPDGER5U8C
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: CURRENT file:  CURRENT
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                         Options.error_if_exists: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.create_if_missing: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                                     Options.env: 0x562641ce7ce0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                                Options.info_log: 0x562642d4aa20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                              Options.statistics: (nil)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.use_fsync: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                              Options.db_log_dir: 
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.write_buffer_manager: 0x562641d49900
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.unordered_write: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.row_cache: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                              Options.wal_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.two_write_queues: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.wal_compression: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.atomic_flush: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.max_background_jobs: 4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.max_background_compactions: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.max_subcompactions: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.max_open_files: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Compression algorithms supported:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kZSTD supported: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kXpressCompression supported: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kBZip2Compression supported: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kLZ4Compression supported: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kZlibCompression supported: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         kSnappyCompression supported: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4b840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4b840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4b840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4b840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4b840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4b840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4b840)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4bd80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4bd80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562642d4bd80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x562641ceba30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d95cf394-3c06-47d7-9082-28b91c433fca
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838883732879, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838883762250, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838883, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d95cf394-3c06-47d7-9082-28b91c433fca", "db_session_id": "ERFXH2SUEJDPDGER5U8C", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 31 05:54:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 05:54:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:43 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:43 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 31 05:54:43 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838883817772, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838883, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d95cf394-3c06-47d7-9082-28b91c433fca", "db_session_id": "ERFXH2SUEJDPDGER5U8C", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:43 compute-0 sudo[87486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:43 compute-0 sudo[87486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:43 compute-0 sudo[87486]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838883845789, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838883, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d95cf394-3c06-47d7-9082-28b91c433fca", "db_session_id": "ERFXH2SUEJDPDGER5U8C", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838883855902, "job": 1, "event": "recovery_finished"}
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 05:54:43 compute-0 sudo[87511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:54:43 compute-0 sudo[87511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562642f52000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: DB pointer 0x562642f04000
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 31 05:54:43 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:54:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 0.00015 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 05:54:44 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 05:54:44 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 05:54:44 compute-0 ceph-osd[87070]: _get_class not permitted to load lua
Jan 31 05:54:44 compute-0 ceph-osd[87070]: _get_class not permitted to load sdk
Jan 31 05:54:44 compute-0 ceph-osd[87070]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 05:54:44 compute-0 ceph-osd[87070]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 05:54:44 compute-0 ceph-osd[87070]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 05:54:44 compute-0 ceph-osd[87070]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 05:54:44 compute-0 ceph-osd[87070]: osd.1 0 load_pgs
Jan 31 05:54:44 compute-0 ceph-osd[87070]: osd.1 0 load_pgs opened 0 pgs
Jan 31 05:54:44 compute-0 ceph-osd[87070]: osd.1 0 log_to_monitors true
Jan 31 05:54:44 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1[87066]: 2026-01-31T05:54:44.011+0000 7fcd692708c0 -1 osd.1 0 log_to_monitors true
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 31 05:54:44 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 05:54:44 compute-0 podman[87613]: 2026-01-31 05:54:44.385612139 +0000 UTC m=+0.077390559 container create 0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_05:54:44
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: [balancer INFO root] No pools available
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:44 compute-0 podman[87613]: 2026-01-31 05:54:44.3345014 +0000 UTC m=+0.026279870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:44 compute-0 systemd[1]: Started libpod-conmon-0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be.scope.
Jan 31 05:54:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 05:54:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:44 compute-0 ceph-mon[75251]: from='osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 05:54:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:44 compute-0 podman[87613]: 2026-01-31 05:54:44.575012608 +0000 UTC m=+0.266790998 container init 0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_black, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Jan 31 05:54:44 compute-0 podman[87613]: 2026-01-31 05:54:44.584062814 +0000 UTC m=+0.275841204 container start 0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:44 compute-0 thirsty_black[87630]: 167 167
Jan 31 05:54:44 compute-0 systemd[1]: libpod-0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be.scope: Deactivated successfully.
Jan 31 05:54:44 compute-0 conmon[87630]: conmon 0f9b669d05d0585a5e2b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be.scope/container/memory.events
Jan 31 05:54:44 compute-0 podman[87613]: 2026-01-31 05:54:44.641026628 +0000 UTC m=+0.332805098 container attach 0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_black, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:54:44 compute-0 podman[87613]: 2026-01-31 05:54:44.642905842 +0000 UTC m=+0.334684272 container died 0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-011e54c1dbf451da02d2e5753154b2ef5cc59f689f3df91405285e72ee445c8f-merged.mount: Deactivated successfully.
Jan 31 05:54:44 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 31 05:54:44 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 05:54:44 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:44 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:44 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 05:54:44 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 05:54:45 compute-0 podman[87613]: 2026-01-31 05:54:45.135015694 +0000 UTC m=+0.826794114 container remove 0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_black, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:45 compute-0 systemd[1]: libpod-conmon-0f9b669d05d0585a5e2b1ef7ca00d71dc4f413f34a067f608af42c47c587e3be.scope: Deactivated successfully.
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:45 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:45 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:45 compute-0 podman[87660]: 2026-01-31 05:54:45.380425342 +0000 UTC m=+0.025941729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:45 compute-0 podman[87660]: 2026-01-31 05:54:45.589909172 +0000 UTC m=+0.235425539 container create 0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:54:45 compute-0 ceph-mon[75251]: Deploying daemon osd.2 on compute-0
Jan 31 05:54:45 compute-0 ceph-mon[75251]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:45 compute-0 ceph-mon[75251]: from='osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 05:54:45 compute-0 ceph-mon[75251]: osdmap e9: 3 total, 0 up, 3 in
Jan 31 05:54:45 compute-0 ceph-mon[75251]: from='osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 05:54:45 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:45 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:45 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:45 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:45 compute-0 systemd[1]: Started libpod-conmon-0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8.scope.
Jan 31 05:54:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25582ceefe2046850ce1da8b4df4b444ee292e6b0dd935b0a746ea7314d76d65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25582ceefe2046850ce1da8b4df4b444ee292e6b0dd935b0a746ea7314d76d65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25582ceefe2046850ce1da8b4df4b444ee292e6b0dd935b0a746ea7314d76d65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25582ceefe2046850ce1da8b4df4b444ee292e6b0dd935b0a746ea7314d76d65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25582ceefe2046850ce1da8b4df4b444ee292e6b0dd935b0a746ea7314d76d65/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:45 compute-0 podman[87660]: 2026-01-31 05:54:45.752058387 +0000 UTC m=+0.397574814 container init 0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:54:45 compute-0 podman[87660]: 2026-01-31 05:54:45.760955308 +0000 UTC m=+0.406471695 container start 0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:45 compute-0 podman[87660]: 2026-01-31 05:54:45.815068922 +0000 UTC m=+0.460585319 container attach 0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 05:54:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 31 05:54:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:45 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test[87677]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 05:54:45 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test[87677]:                             [--no-systemd] [--no-tmpfs]
Jan 31 05:54:45 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test[87677]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 05:54:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 05:54:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Jan 31 05:54:45 compute-0 ceph-osd[87070]: osd.1 0 done with init, starting boot process
Jan 31 05:54:45 compute-0 ceph-osd[87070]: osd.1 0 start_boot
Jan 31 05:54:45 compute-0 ceph-osd[87070]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 05:54:45 compute-0 ceph-osd[87070]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 05:54:45 compute-0 ceph-osd[87070]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 05:54:45 compute-0 ceph-osd[87070]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 05:54:45 compute-0 ceph-osd[87070]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 31 05:54:45 compute-0 systemd[1]: libpod-0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8.scope: Deactivated successfully.
Jan 31 05:54:45 compute-0 podman[87660]: 2026-01-31 05:54:45.944861614 +0000 UTC m=+0.590378001 container died 0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:46 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Jan 31 05:54:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:46 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:46 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:46 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:46 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:46 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:46 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:46 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1207044519; not ready for session (expect reconnect)
Jan 31 05:54:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:46 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:46 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-25582ceefe2046850ce1da8b4df4b444ee292e6b0dd935b0a746ea7314d76d65-merged.mount: Deactivated successfully.
Jan 31 05:54:46 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:46 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:46 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:46 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:46 compute-0 podman[87660]: 2026-01-31 05:54:46.678464186 +0000 UTC m=+1.323980573 container remove 0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:46 compute-0 systemd[1]: libpod-conmon-0e3a5d75ddd8e915855c9c2f28204adcc556d10197d7900c75e4889baf812db8.scope: Deactivated successfully.
Jan 31 05:54:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:47 compute-0 ceph-mon[75251]: from='osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 05:54:47 compute-0 ceph-mon[75251]: osdmap e10: 3 total, 0 up, 3 in
Jan 31 05:54:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:47 compute-0 ceph-mon[75251]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:47 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1207044519; not ready for session (expect reconnect)
Jan 31 05:54:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:47 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:47 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:47 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:47 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:47 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:47 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:47 compute-0 systemd[1]: Reloading.
Jan 31 05:54:47 compute-0 systemd-rc-local-generator[87738]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:47 compute-0 systemd-sysv-generator[87742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:47 compute-0 systemd[1]: Reloading.
Jan 31 05:54:47 compute-0 systemd-rc-local-generator[87776]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:54:47 compute-0 systemd-sysv-generator[87779]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:54:47 compute-0 systemd[1]: Starting Ceph osd.2 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:54:48 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1207044519; not ready for session (expect reconnect)
Jan 31 05:54:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:48 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:48 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:48 compute-0 podman[87839]: 2026-01-31 05:54:48.129845257 +0000 UTC m=+0.020886532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:48 compute-0 podman[87839]: 2026-01-31 05:54:48.312094063 +0000 UTC m=+0.203135328 container create bc0ba08e5e3ee118a19d6378abca2141d36f050f1032413b3ce7601503b62d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:48 compute-0 ceph-mon[75251]: purged_snaps scrub starts
Jan 31 05:54:48 compute-0 ceph-mon[75251]: purged_snaps scrub ok
Jan 31 05:54:48 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:48 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:48 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:48 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:48 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:48 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517dde54f2f7496d9f27c0ba79ac6fa49a469262ff1626b7bbf1caeca87ca5d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517dde54f2f7496d9f27c0ba79ac6fa49a469262ff1626b7bbf1caeca87ca5d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517dde54f2f7496d9f27c0ba79ac6fa49a469262ff1626b7bbf1caeca87ca5d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517dde54f2f7496d9f27c0ba79ac6fa49a469262ff1626b7bbf1caeca87ca5d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517dde54f2f7496d9f27c0ba79ac6fa49a469262ff1626b7bbf1caeca87ca5d4/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:48 compute-0 podman[87839]: 2026-01-31 05:54:48.63062574 +0000 UTC m=+0.521666995 container init bc0ba08e5e3ee118a19d6378abca2141d36f050f1032413b3ce7601503b62d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:48 compute-0 podman[87839]: 2026-01-31 05:54:48.638652221 +0000 UTC m=+0.529693486 container start bc0ba08e5e3ee118a19d6378abca2141d36f050f1032413b3ce7601503b62d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:54:48 compute-0 podman[87839]: 2026-01-31 05:54:48.687086276 +0000 UTC m=+0.578127531 container attach bc0ba08e5e3ee118a19d6378abca2141d36f050f1032413b3ce7601503b62d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:48 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:48 compute-0 bash[87839]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:48 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:48 compute-0 bash[87839]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:49 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1207044519; not ready for session (expect reconnect)
Jan 31 05:54:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:49 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:49 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:49 compute-0 ceph-mgr[75550]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 05:54:49 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:49 compute-0 lvm[87941]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:54:49 compute-0 lvm[87941]: VG ceph_vg1 finished
Jan 31 05:54:49 compute-0 lvm[87940]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:54:49 compute-0 lvm[87940]: VG ceph_vg0 finished
Jan 31 05:54:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:49 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:49 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:49 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:49 compute-0 ceph-mon[75251]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:49 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:49 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:49 compute-0 lvm[87943]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:54:49 compute-0 lvm[87943]: VG ceph_vg2 finished
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:49 compute-0 bash[87839]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 05:54:49 compute-0 bash[87839]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:49 compute-0 bash[87839]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 05:54:49 compute-0 bash[87839]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 05:54:49 compute-0 bash[87839]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:49 compute-0 bash[87839]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:49 compute-0 bash[87839]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 05:54:49 compute-0 bash[87839]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 05:54:49 compute-0 bash[87839]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate[87854]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 05:54:49 compute-0 bash[87839]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 05:54:49 compute-0 systemd[1]: libpod-bc0ba08e5e3ee118a19d6378abca2141d36f050f1032413b3ce7601503b62d8d.scope: Deactivated successfully.
Jan 31 05:54:49 compute-0 systemd[1]: libpod-bc0ba08e5e3ee118a19d6378abca2141d36f050f1032413b3ce7601503b62d8d.scope: Consumed 1.476s CPU time.
Jan 31 05:54:49 compute-0 podman[87839]: 2026-01-31 05:54:49.722801511 +0000 UTC m=+1.613842766 container died bc0ba08e5e3ee118a19d6378abca2141d36f050f1032413b3ce7601503b62d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 12.490 iops: 3197.541 elapsed_sec: 0.938
Jan 31 05:54:49 compute-0 ceph-osd[86016]: log_channel(cluster) log [WRN] : OSD bench result of 3197.541208 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 0 waiting for initial osdmap
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0[86012]: 2026-01-31T05:54:49.727+0000 7f437a1d9640 -1 osd.0 0 waiting for initial osdmap
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 05:54:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-517dde54f2f7496d9f27c0ba79ac6fa49a469262ff1626b7bbf1caeca87ca5d4-merged.mount: Deactivated successfully.
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 10 set_numa_affinity not setting numa affinity
Jan 31 05:54:49 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-0[86012]: 2026-01-31T05:54:49.804+0000 7f4374fde640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 05:54:49 compute-0 ceph-osd[86016]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 31 05:54:49 compute-0 podman[87839]: 2026-01-31 05:54:49.877615788 +0000 UTC m=+1.768657013 container remove bc0ba08e5e3ee118a19d6378abca2141d36f050f1032413b3ce7601503b62d8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:50 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1207044519; not ready for session (expect reconnect)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:50 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:50 compute-0 podman[88108]: 2026-01-31 05:54:50.092738737 +0000 UTC m=+0.047867986 container create 050c75d570ab0e059b0d127a16a6ff61f657d4d4f6c203afa5321dd0077e8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f93644c08f0e950186e928824d64cc854a39e67299151ab1b41fd9663df365/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f93644c08f0e950186e928824d64cc854a39e67299151ab1b41fd9663df365/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f93644c08f0e950186e928824d64cc854a39e67299151ab1b41fd9663df365/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f93644c08f0e950186e928824d64cc854a39e67299151ab1b41fd9663df365/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89f93644c08f0e950186e928824d64cc854a39e67299151ab1b41fd9663df365/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:50 compute-0 podman[88108]: 2026-01-31 05:54:50.071658549 +0000 UTC m=+0.026787848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:50 compute-0 podman[88108]: 2026-01-31 05:54:50.180780218 +0000 UTC m=+0.135909497 container init 050c75d570ab0e059b0d127a16a6ff61f657d4d4f6c203afa5321dd0077e8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:50 compute-0 podman[88108]: 2026-01-31 05:54:50.193606386 +0000 UTC m=+0.148735625 container start 050c75d570ab0e059b0d127a16a6ff61f657d4d4f6c203afa5321dd0077e8d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 05:54:50 compute-0 bash[88108]: 050c75d570ab0e059b0d127a16a6ff61f657d4d4f6c203afa5321dd0077e8d63
Jan 31 05:54:50 compute-0 systemd[1]: Started Ceph osd.2 for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:54:50 compute-0 ceph-osd[88127]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: pidfile_write: ignore empty --pid-file
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 sudo[87511]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 sudo[88143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:50 compute-0 sudo[88143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:50 compute-0 sudo[88143]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0400 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d0000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2265278321; not ready for session (expect reconnect)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:50 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 05:54:50 compute-0 sudo[88175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:54:50 compute-0 sudo[88175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:50 compute-0 ceph-osd[88127]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 31 05:54:50 compute-0 ceph-osd[88127]: load: jerasure load: lrc 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321] boot
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-osd[86016]: osd.0 11 state: booting -> active
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:50 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:50 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:50 compute-0 ceph-osd[88127]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 05:54:50 compute-0 ceph-osd[88127]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a06d1c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount shared_bdev_used = 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: RocksDB version: 7.9.2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Git sha 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: DB SUMMARY
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: DB Session ID:  PNLO7XWPTARER3SBTMW7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: CURRENT file:  CURRENT
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                         Options.error_if_exists: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.create_if_missing: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                                     Options.env: 0x5633a0561ea0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                                Options.info_log: 0x5633a15b28a0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                              Options.statistics: (nil)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.use_fsync: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                              Options.db_log_dir: 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.write_buffer_manager: 0x5633a05c6b40
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.unordered_write: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.row_cache: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                              Options.wal_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.two_write_queues: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.wal_compression: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.atomic_flush: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.max_background_jobs: 4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.max_background_compactions: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.max_subcompactions: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.max_open_files: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Compression algorithms supported:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kZSTD supported: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kXpressCompression supported: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kBZip2Compression supported: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kLZ4Compression supported: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kZlibCompression supported: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kSnappyCompression supported: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a0565a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a0565a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b2c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a0565a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 496cf73c-c699-49b3-abb8-44ddf408f48b
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838890596901, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838890598535, "job": 1, "event": "recovery_finished"}
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: freelist init
Jan 31 05:54:50 compute-0 ceph-osd[88127]: freelist _read_cfg
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs umount
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bdev(0x5633a1367800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluefs mount shared_bdev_used = 27262976
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: RocksDB version: 7.9.2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Git sha 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: DB SUMMARY
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: DB Session ID:  PNLO7XWPTARER3SBTMW6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: CURRENT file:  CURRENT
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                         Options.error_if_exists: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.create_if_missing: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                                     Options.env: 0x5633a0561ea0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                                Options.info_log: 0x5633a15b36e0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                              Options.statistics: (nil)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.use_fsync: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                              Options.db_log_dir: 
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.write_buffer_manager: 0x5633a05c7900
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.unordered_write: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.row_cache: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                              Options.wal_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.two_write_queues: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.wal_compression: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.atomic_flush: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.max_background_jobs: 4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.max_background_compactions: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.max_subcompactions: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.max_open_files: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Compression algorithms supported:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kZSTD supported: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kXpressCompression supported: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kBZip2Compression supported: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kLZ4Compression supported: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kZlibCompression supported: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         kSnappyCompression supported: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3b40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3b40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3b40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3b40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3b40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3b40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3b40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05658d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3ae0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05654b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3ae0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05654b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:           Options.merge_operator: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633a15b3ae0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633a05654b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.compression: LZ4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.num_levels: 7
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.bloom_locality: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                               Options.ttl: 2592000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                       Options.enable_blob_files: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                           Options.min_blob_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 496cf73c-c699-49b3-abb8-44ddf408f48b
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838890640390, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838890667011, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838890, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "496cf73c-c699-49b3-abb8-44ddf408f48b", "db_session_id": "PNLO7XWPTARER3SBTMW6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838890698644, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838890, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "496cf73c-c699-49b3-abb8-44ddf408f48b", "db_session_id": "PNLO7XWPTARER3SBTMW6", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:50 compute-0 podman[88531]: 2026-01-31 05:54:50.70379346 +0000 UTC m=+0.057404900 container create a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_goodall, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838890727495, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838890, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "496cf73c-c699-49b3-abb8-44ddf408f48b", "db_session_id": "PNLO7XWPTARER3SBTMW6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769838890730437, "job": 1, "event": "recovery_finished"}
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 05:54:50 compute-0 systemd[1]: Started libpod-conmon-a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69.scope.
Jan 31 05:54:50 compute-0 podman[88531]: 2026-01-31 05:54:50.668955201 +0000 UTC m=+0.022566671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:50 compute-0 podman[88531]: 2026-01-31 05:54:50.838913059 +0000 UTC m=+0.192524539 container init a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:50 compute-0 podman[88531]: 2026-01-31 05:54:50.844350989 +0000 UTC m=+0.197962429 container start a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:50 compute-0 eloquent_goodall[88622]: 167 167
Jan 31 05:54:50 compute-0 systemd[1]: libpod-a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69.scope: Deactivated successfully.
Jan 31 05:54:50 compute-0 podman[88531]: 2026-01-31 05:54:50.87008336 +0000 UTC m=+0.223694810 container attach a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:54:50 compute-0 podman[88531]: 2026-01-31 05:54:50.870453862 +0000 UTC m=+0.224065302 container died a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_goodall, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5633a17a9c00
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: DB pointer 0x5633a176c000
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 31 05:54:50 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 05:54:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 05:54:50 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 05:54:50 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 05:54:50 compute-0 ceph-osd[88127]: _get_class not permitted to load lua
Jan 31 05:54:50 compute-0 ceph-osd[88127]: _get_class not permitted to load sdk
Jan 31 05:54:50 compute-0 ceph-osd[88127]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 05:54:50 compute-0 ceph-osd[88127]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 05:54:50 compute-0 ceph-osd[88127]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 05:54:50 compute-0 ceph-osd[88127]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 05:54:50 compute-0 ceph-osd[88127]: osd.2 0 load_pgs
Jan 31 05:54:50 compute-0 ceph-osd[88127]: osd.2 0 load_pgs opened 0 pgs
Jan 31 05:54:50 compute-0 ceph-osd[88127]: osd.2 0 log_to_monitors true
Jan 31 05:54:50 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2[88123]: 2026-01-31T05:54:50.877+0000 7f66ffb6e8c0 -1 osd.2 0 log_to_monitors true
Jan 31 05:54:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 31 05:54:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 05:54:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3af8b7c9155d6239a41ab3f38b36e61fb0e8659fee09a9664372c75434cc301b-merged.mount: Deactivated successfully.
Jan 31 05:54:51 compute-0 podman[88531]: 2026-01-31 05:54:51.045332232 +0000 UTC m=+0.398943712 container remove a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_goodall, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:54:51 compute-0 systemd[1]: libpod-conmon-a4528d1049fe2c8e17a99bb380a7e5b22daa4b0a4d59b6e40c7efe560f4d0b69.scope: Deactivated successfully.
Jan 31 05:54:51 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1207044519; not ready for session (expect reconnect)
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:51 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:51 compute-0 podman[88681]: 2026-01-31 05:54:51.246375428 +0000 UTC m=+0.076200018 container create 818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:54:51 compute-0 podman[88681]: 2026-01-31 05:54:51.198504602 +0000 UTC m=+0.028329252 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:51 compute-0 ceph-mgr[75550]: [devicehealth INFO root] creating mgr pool
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 05:54:51 compute-0 systemd[1]: Started libpod-conmon-818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c.scope.
Jan 31 05:54:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7280ffe4c8c8f1fa694a4555170f797ff7627c0b0eaa01f7317434a9a9ba413b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7280ffe4c8c8f1fa694a4555170f797ff7627c0b0eaa01f7317434a9a9ba413b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7280ffe4c8c8f1fa694a4555170f797ff7627c0b0eaa01f7317434a9a9ba413b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7280ffe4c8c8f1fa694a4555170f797ff7627c0b0eaa01f7317434a9a9ba413b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:51 compute-0 podman[88681]: 2026-01-31 05:54:51.430867954 +0000 UTC m=+0.260692554 container init 818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:54:51 compute-0 podman[88681]: 2026-01-31 05:54:51.438054906 +0000 UTC m=+0.267879466 container start 818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:51 compute-0 podman[88681]: 2026-01-31 05:54:51.44302603 +0000 UTC m=+0.272850640 container attach 818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 05:54:51 compute-0 ceph-mon[75251]: OSD bench result of 3197.541208 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 05:54:51 compute-0 ceph-mon[75251]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 05:54:51 compute-0 ceph-mon[75251]: osd.0 [v2:192.168.122.100:6802/2265278321,v1:192.168.122.100:6803/2265278321] boot
Jan 31 05:54:51 compute-0 ceph-mon[75251]: osdmap e11: 3 total, 1 up, 3 in
Jan 31 05:54:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 05:54:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:51 compute-0 ceph-mon[75251]: from='osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 05:54:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:51 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:51 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 31 05:54:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 05:54:51 compute-0 ceph-osd[86016]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 05:54:51 compute-0 ceph-osd[86016]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 31 05:54:51 compute-0 ceph-osd[86016]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.284 iops: 9288.816 elapsed_sec: 0.323
Jan 31 05:54:51 compute-0 ceph-osd[87070]: log_channel(cluster) log [WRN] : OSD bench result of 9288.815697 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 0 waiting for initial osdmap
Jan 31 05:54:51 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1[87066]: 2026-01-31T05:54:51.768+0000 7fcd651f2640 -1 osd.1 0 waiting for initial osdmap
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 12 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 12 set_numa_affinity not setting numa affinity
Jan 31 05:54:51 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-1[87066]: 2026-01-31T05:54:51.791+0000 7fcd5fff7640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 05:54:51 compute-0 ceph-osd[87070]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 31 05:54:51 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 05:54:51 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 05:54:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:52 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1207044519; not ready for session (expect reconnect)
Jan 31 05:54:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:52 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 05:54:52 compute-0 lvm[88775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:54:52 compute-0 lvm[88775]: VG ceph_vg0 finished
Jan 31 05:54:52 compute-0 lvm[88777]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:54:52 compute-0 lvm[88777]: VG ceph_vg1 finished
Jan 31 05:54:52 compute-0 lvm[88778]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:54:52 compute-0 lvm[88778]: VG ceph_vg2 finished
Jan 31 05:54:52 compute-0 unruffled_boyd[88700]: {}
Jan 31 05:54:52 compute-0 systemd[1]: libpod-818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c.scope: Deactivated successfully.
Jan 31 05:54:52 compute-0 systemd[1]: libpod-818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c.scope: Consumed 1.271s CPU time.
Jan 31 05:54:52 compute-0 podman[88681]: 2026-01-31 05:54:52.275082166 +0000 UTC m=+1.104906726 container died 818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7280ffe4c8c8f1fa694a4555170f797ff7627c0b0eaa01f7317434a9a9ba413b-merged.mount: Deactivated successfully.
Jan 31 05:54:52 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 31 05:54:52 compute-0 podman[88681]: 2026-01-31 05:54:52.420404302 +0000 UTC m=+1.250228902 container remove 818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_boyd, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:54:52 compute-0 systemd[1]: libpod-conmon-818e4082a6f8144c2751b8c2ae94f06d246719cfee4b3e37fb0dc077ef71943c.scope: Deactivated successfully.
Jan 31 05:54:52 compute-0 sudo[88175]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 05:54:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Jan 31 05:54:52 compute-0 ceph-osd[88127]: osd.2 0 done with init, starting boot process
Jan 31 05:54:52 compute-0 ceph-osd[88127]: osd.2 0 start_boot
Jan 31 05:54:52 compute-0 ceph-osd[88127]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 05:54:52 compute-0 ceph-osd[88127]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 05:54:52 compute-0 ceph-osd[88127]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 05:54:52 compute-0 ceph-osd[88127]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 05:54:52 compute-0 ceph-osd[88127]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519] boot
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Jan 31 05:54:52 compute-0 ceph-osd[87070]: osd.1 13 state: booting -> active
Jan 31 05:54:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[12,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:54:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:52 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 05:54:52 compute-0 ceph-mon[75251]: osdmap e12: 3 total, 1 up, 3 in
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 05:54:52 compute-0 ceph-mon[75251]: OSD bench result of 9288.815697 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:52 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:54:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:52 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:52 compute-0 sudo[88795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:54:52 compute-0 sudo[88795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:52 compute-0 sudo[88795]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:52 compute-0 sudo[88820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:52 compute-0 sudo[88820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:52 compute-0 sudo[88820]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:52 compute-0 sudo[88845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:54:52 compute-0 sudo[88845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:53 compute-0 podman[88916]: 2026-01-31 05:54:53.337890169 +0000 UTC m=+0.131217333 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:53 compute-0 podman[88916]: 2026-01-31 05:54:53.457417082 +0000 UTC m=+0.250744156 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:54:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 31 05:54:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 31 05:54:53 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 31 05:54:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:53 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:53 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[12,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:54:53 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:54:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:53 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:53 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:53 compute-0 ceph-mon[75251]: pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 31 05:54:53 compute-0 ceph-mon[75251]: from='osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 05:54:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 05:54:53 compute-0 ceph-mon[75251]: osd.1 [v2:192.168.122.100:6806/1207044519,v1:192.168.122.100:6807/1207044519] boot
Jan 31 05:54:53 compute-0 ceph-mon[75251]: osdmap e13: 3 total, 2 up, 3 in
Jan 31 05:54:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 05:54:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:53 compute-0 ceph-mon[75251]: osdmap e14: 3 total, 2 up, 3 in
Jan 31 05:54:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:53 compute-0 ceph-mgr[75550]: [devicehealth INFO root] creating main.db for devicehealth
Jan 31 05:54:53 compute-0 sudo[89057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iayzvtzdexidgcwthsgriflfganjdtgo ; /usr/bin/python3'
Jan 31 05:54:53 compute-0 sudo[89057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:54 compute-0 ceph-mgr[75550]: [devicehealth INFO root] Check health
Jan 31 05:54:54 compute-0 ceph-mgr[75550]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 31 05:54:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 05:54:54 compute-0 sudo[89091]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 31 05:54:54 compute-0 sudo[89091]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 05:54:54 compute-0 sudo[89091]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 31 05:54:54 compute-0 sudo[89091]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 05:54:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 05:54:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 05:54:54 compute-0 python3[89063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:54 compute-0 sudo[88845]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:54 compute-0 podman[89108]: 2026-01-31 05:54:54.236293029 +0000 UTC m=+0.052720676 container create a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03 (image=quay.io/ceph/ceph:v20, name=competent_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:54 compute-0 systemd[1]: Started libpod-conmon-a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03.scope.
Jan 31 05:54:54 compute-0 sudo[89122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:54 compute-0 sudo[89122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:54 compute-0 sudo[89122]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:54 compute-0 podman[89108]: 2026-01-31 05:54:54.201345556 +0000 UTC m=+0.017773213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/984a95d29ecc97aa715407a9680318dcddccb37298d6f3c35b51b43e2b5c792f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/984a95d29ecc97aa715407a9680318dcddccb37298d6f3c35b51b43e2b5c792f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/984a95d29ecc97aa715407a9680318dcddccb37298d6f3c35b51b43e2b5c792f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:54 compute-0 sudo[89152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- inventory --format=json-pretty --filter-for-batch
Jan 31 05:54:54 compute-0 sudo[89152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:54 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 31 05:54:54 compute-0 podman[89108]: 2026-01-31 05:54:54.445950526 +0000 UTC m=+0.262378203 container init a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03 (image=quay.io/ceph/ceph:v20, name=competent_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:54:54 compute-0 podman[89108]: 2026-01-31 05:54:54.451127027 +0000 UTC m=+0.267554684 container start a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03 (image=quay.io/ceph/ceph:v20, name=competent_mirzakhani, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:54:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 31 05:54:54 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:54:54 compute-0 podman[89108]: 2026-01-31 05:54:54.607892843 +0000 UTC m=+0.424320520 container attach a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03 (image=quay.io/ceph/ceph:v20, name=competent_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:54 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 31 05:54:54 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 31 05:54:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:54 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:54 compute-0 ceph-mon[75251]: purged_snaps scrub starts
Jan 31 05:54:54 compute-0 ceph-mon[75251]: purged_snaps scrub ok
Jan 31 05:54:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:54 compute-0 ceph-mon[75251]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 05:54:54 compute-0 ceph-mon[75251]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 05:54:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 05:54:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:54 compute-0 ceph-mon[75251]: pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 31 05:54:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:54 compute-0 podman[89209]: 2026-01-31 05:54:54.706422441 +0000 UTC m=+0.068612392 container create 9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:54 compute-0 podman[89209]: 2026-01-31 05:54:54.661403385 +0000 UTC m=+0.023593426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:54 compute-0 systemd[1]: Started libpod-conmon-9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148.scope.
Jan 31 05:54:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:54 compute-0 podman[89209]: 2026-01-31 05:54:54.834867386 +0000 UTC m=+0.197057417 container init 9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 05:54:54 compute-0 podman[89209]: 2026-01-31 05:54:54.841028481 +0000 UTC m=+0.203218462 container start 9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:54:54 compute-0 quirky_satoshi[89226]: 167 167
Jan 31 05:54:54 compute-0 systemd[1]: libpod-9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148.scope: Deactivated successfully.
Jan 31 05:54:54 compute-0 podman[89209]: 2026-01-31 05:54:54.871821129 +0000 UTC m=+0.234011120 container attach 9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_satoshi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:54:54 compute-0 podman[89209]: 2026-01-31 05:54:54.872272285 +0000 UTC m=+0.234462276 container died 9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-65416491d4e9a063838557ae6be6b650648b154b8d0b65875e58dc5b9e2d9428-merged.mount: Deactivated successfully.
Jan 31 05:54:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 05:54:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205301435' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 05:54:55 compute-0 competent_mirzakhani[89147]: 
Jan 31 05:54:55 compute-0 competent_mirzakhani[89147]: {"fsid":"797ee2fc-ca49-5eee-87c0-542bb035a7d7","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":88,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1769838892,"num_in_osds":3,"osd_in_since":1769838871,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":447016960,"bytes_avail":21023625216,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-31T05:53:24:897973+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T05:54:46.397082+0000","services":{}},"progress_events":{}}
Jan 31 05:54:55 compute-0 systemd[1]: libpod-a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03.scope: Deactivated successfully.
Jan 31 05:54:55 compute-0 podman[89209]: 2026-01-31 05:54:55.275673122 +0000 UTC m=+0.637863083 container remove 9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_satoshi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:54:55 compute-0 systemd[1]: libpod-conmon-9c956b9b4dcfd7a4e9fab3963a6a8b06d396f520f227ebc37b622778042d3148.scope: Deactivated successfully.
Jan 31 05:54:55 compute-0 podman[89108]: 2026-01-31 05:54:55.364620605 +0000 UTC m=+1.181048262 container died a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03 (image=quay.io/ceph/ceph:v20, name=competent_mirzakhani, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 05:54:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vavqfa(active, since 71s)
Jan 31 05:54:55 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:54:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:55 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:56 compute-0 ceph-mon[75251]: osdmap e15: 3 total, 2 up, 3 in
Jan 31 05:54:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:56 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2205301435' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 05:54:56 compute-0 ceph-mon[75251]: mgrmap e9: compute-0.vavqfa(active, since 71s)
Jan 31 05:54:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-984a95d29ecc97aa715407a9680318dcddccb37298d6f3c35b51b43e2b5c792f-merged.mount: Deactivated successfully.
Jan 31 05:54:56 compute-0 podman[89245]: 2026-01-31 05:54:56.207867353 +0000 UTC m=+1.162407089 container remove a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03 (image=quay.io/ceph/ceph:v20, name=competent_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:54:56 compute-0 systemd[1]: libpod-conmon-a3a6e991fe0cf1fb48d2babd9dcb9710bdf9f31ce838c3d4d79a04e89c890e03.scope: Deactivated successfully.
Jan 31 05:54:56 compute-0 sudo[89057]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:56 compute-0 podman[89264]: 2026-01-31 05:54:56.279245957 +0000 UTC m=+0.909506234 container create a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:54:56 compute-0 podman[89264]: 2026-01-31 05:54:56.222717756 +0000 UTC m=+0.852978113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:56 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 31 05:54:56 compute-0 systemd[1]: Started libpod-conmon-a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04.scope.
Jan 31 05:54:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1626c248885b04285e0bc4b8525fec712071434a2d23c8314f4e0dc3c80a9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1626c248885b04285e0bc4b8525fec712071434a2d23c8314f4e0dc3c80a9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1626c248885b04285e0bc4b8525fec712071434a2d23c8314f4e0dc3c80a9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1626c248885b04285e0bc4b8525fec712071434a2d23c8314f4e0dc3c80a9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:56 compute-0 podman[89264]: 2026-01-31 05:54:56.462466888 +0000 UTC m=+1.092727265 container init a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:54:56 compute-0 podman[89264]: 2026-01-31 05:54:56.471089038 +0000 UTC m=+1.101349315 container start a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:54:56 compute-0 podman[89264]: 2026-01-31 05:54:56.499171638 +0000 UTC m=+1.129431955 container attach a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 05:54:56 compute-0 sudo[89310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uacqhyblldcjrxvrcwlfhaakdnwedrdo ; /usr/bin/python3'
Jan 31 05:54:56 compute-0 sudo[89310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:56 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:54:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:56 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:56 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:56 compute-0 python3[89312]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:56 compute-0 podman[89318]: 2026-01-31 05:54:56.813604615 +0000 UTC m=+0.100984527 container create 54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10 (image=quay.io/ceph/ceph:v20, name=wizardly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:54:56 compute-0 podman[89318]: 2026-01-31 05:54:56.732418279 +0000 UTC m=+0.019798231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:56 compute-0 systemd[1]: Started libpod-conmon-54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10.scope.
Jan 31 05:54:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:54:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151c358d305dc1b7e01a9dfe2e8c87bcc6989e1d147a3f60704b6c278bbba5c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/151c358d305dc1b7e01a9dfe2e8c87bcc6989e1d147a3f60704b6c278bbba5c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:54:56 compute-0 eager_leavitt[89282]: [
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:     {
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "available": false,
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "being_replaced": false,
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "ceph_device_lvm": false,
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "lsm_data": {},
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "lvs": [],
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "path": "/dev/sr0",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "rejected_reasons": [
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "Has a FileSystem",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "Insufficient space (<5GB)"
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         ],
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         "sys_api": {
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "actuators": null,
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "device_nodes": [
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:                 "sr0"
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             ],
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "devname": "sr0",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "human_readable_size": "482.00 KB",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "id_bus": "ata",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "model": "QEMU DVD-ROM",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "nr_requests": "2",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "parent": "/dev/sr0",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "partitions": {},
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "path": "/dev/sr0",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "removable": "1",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "rev": "2.5+",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "ro": "0",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "rotational": "1",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "sas_address": "",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "sas_device_handle": "",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "scheduler_mode": "mq-deadline",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "sectors": 0,
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "sectorsize": "2048",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "size": 493568.0,
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "support_discard": "2048",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "type": "disk",
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:             "vendor": "QEMU"
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:         }
Jan 31 05:54:56 compute-0 eager_leavitt[89282]:     }
Jan 31 05:54:56 compute-0 eager_leavitt[89282]: ]
Jan 31 05:54:56 compute-0 systemd[1]: libpod-a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04.scope: Deactivated successfully.
Jan 31 05:54:57 compute-0 podman[89318]: 2026-01-31 05:54:57.013834358 +0000 UTC m=+0.301214290 container init 54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10 (image=quay.io/ceph/ceph:v20, name=wizardly_dhawan, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:54:57 compute-0 podman[89318]: 2026-01-31 05:54:57.019033533 +0000 UTC m=+0.306413485 container start 54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10 (image=quay.io/ceph/ceph:v20, name=wizardly_dhawan, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: pgmap v42: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 31 05:54:57 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:57 compute-0 podman[89318]: 2026-01-31 05:54:57.116230223 +0000 UTC m=+0.403610135 container attach 54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10 (image=quay.io/ceph/ceph:v20, name=wizardly_dhawan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 05:54:57 compute-0 podman[89264]: 2026-01-31 05:54:57.151433521 +0000 UTC m=+1.781693828 container died a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:54:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e1626c248885b04285e0bc4b8525fec712071434a2d23c8314f4e0dc3c80a9b-merged.mount: Deactivated successfully.
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/537910356' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:54:57 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:57 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:57 compute-0 podman[89264]: 2026-01-31 05:54:57.737153206 +0000 UTC m=+2.367413463 container remove a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:54:57 compute-0 systemd[1]: libpod-conmon-a6c4bfadca16c8568f4735f1727b851ab5decdd6236bcd26b30a3c35289aff04.scope: Deactivated successfully.
Jan 31 05:54:57 compute-0 sudo[89152]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 05:54:57 compute-0 ceph-mgr[75550]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43683k
Jan 31 05:54:57 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43683k
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 31 05:54:57 compute-0 ceph-mgr[75550]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44731596: error parsing value: Value '44731596' is below minimum 939524096
Jan 31 05:54:57 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44731596: error parsing value: Value '44731596' is below minimum 939524096
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:54:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:54:58 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:54:58 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:54:58 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:54:58 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:58 compute-0 sudo[90153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:54:58 compute-0 sudo[90153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:58 compute-0 sudo[90153]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:58 compute-0 sudo[90178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:54:58 compute-0 sudo[90178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:54:58 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 05:54:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/537910356' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:54:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:54:58 compute-0 podman[90216]: 2026-01-31 05:54:58.445457157 +0000 UTC m=+0.018386602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:54:58 compute-0 podman[90216]: 2026-01-31 05:54:58.556343958 +0000 UTC m=+0.129273443 container create f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:54:58 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:54:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:58 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:58 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:58 compute-0 systemd[1]: Started libpod-conmon-f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112.scope.
Jan 31 05:54:58 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/537910356' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:54:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 31 05:54:58 compute-0 wizardly_dhawan[89686]: pool 'vms' created
Jan 31 05:54:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:54:58 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 31 05:54:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:58 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:58 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:58 compute-0 systemd[1]: libpod-54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10.scope: Deactivated successfully.
Jan 31 05:54:58 compute-0 podman[90216]: 2026-01-31 05:54:58.825775784 +0000 UTC m=+0.398705239 container init f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:54:58 compute-0 podman[89318]: 2026-01-31 05:54:58.827825641 +0000 UTC m=+2.115205553 container died 54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10 (image=quay.io/ceph/ceph:v20, name=wizardly_dhawan, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:54:58 compute-0 podman[90216]: 2026-01-31 05:54:58.831255937 +0000 UTC m=+0.404185382 container start f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:54:58 compute-0 nice_galois[90232]: 167 167
Jan 31 05:54:58 compute-0 systemd[1]: libpod-f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112.scope: Deactivated successfully.
Jan 31 05:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-151c358d305dc1b7e01a9dfe2e8c87bcc6989e1d147a3f60704b6c278bbba5c0-merged.mount: Deactivated successfully.
Jan 31 05:54:59 compute-0 podman[89318]: 2026-01-31 05:54:59.368881034 +0000 UTC m=+2.656260946 container remove 54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10 (image=quay.io/ceph/ceph:v20, name=wizardly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:54:59 compute-0 sudo[89310]: pam_unix(sudo:session): session closed for user root
Jan 31 05:54:59 compute-0 systemd[1]: libpod-conmon-54370432b3cd44e370fc90a4f2cbb77ec47382792e91eac8569c24f2a8597a10.scope: Deactivated successfully.
Jan 31 05:54:59 compute-0 sudo[90285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjqkficflgpyjmfqractrxknjuzmcra ; /usr/bin/python3'
Jan 31 05:54:59 compute-0 sudo[90285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:54:59 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:54:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:54:59 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:59 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:54:59 compute-0 podman[90216]: 2026-01-31 05:54:59.645162031 +0000 UTC m=+1.218091476 container attach f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:54:59 compute-0 podman[90216]: 2026-01-31 05:54:59.645479629 +0000 UTC m=+1.218409074 container died f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:59 compute-0 python3[90287]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:54:59 compute-0 ceph-mon[75251]: Adjusting osd_memory_target on compute-0 to 43683k
Jan 31 05:54:59 compute-0 ceph-mon[75251]: Unable to set osd_memory_target on compute-0 to 44731596: error parsing value: Value '44731596' is below minimum 939524096
Jan 31 05:54:59 compute-0 ceph-mon[75251]: pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 05:54:59 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:59 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/537910356' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:54:59 compute-0 ceph-mon[75251]: osdmap e16: 3 total, 2 up, 3 in
Jan 31 05:54:59 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1594f90462321ae189f590689ec568caa574e4bc6451117f2ca9e55f4e898a2d-merged.mount: Deactivated successfully.
Jan 31 05:54:59 compute-0 podman[90250]: 2026-01-31 05:54:59.917773255 +0000 UTC m=+1.074278679 container remove f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:54:59 compute-0 systemd[1]: libpod-conmon-f6a3ef78f22dcbe791e80c89646520a5ef03742ed995cb8ebc5d64e0c2351112.scope: Deactivated successfully.
Jan 31 05:54:59 compute-0 podman[90288]: 2026-01-31 05:54:59.834917233 +0000 UTC m=+0.151519101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:54:59 compute-0 podman[90288]: 2026-01-31 05:54:59.94098789 +0000 UTC m=+0.257589748 container create d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc (image=quay.io/ceph/ceph:v20, name=modest_matsumoto, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:54:59 compute-0 systemd[1]: Started libpod-conmon-d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc.scope.
Jan 31 05:55:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ebfaf71d70f739c15a4cea628fb5962f68e373218b093d50e4ee822ef65b254/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ebfaf71d70f739c15a4cea628fb5962f68e373218b093d50e4ee822ef65b254/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:00 compute-0 podman[90288]: 2026-01-31 05:55:00.032784001 +0000 UTC m=+0.349385879 container init d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc (image=quay.io/ceph/ceph:v20, name=modest_matsumoto, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:00 compute-0 podman[90288]: 2026-01-31 05:55:00.037321657 +0000 UTC m=+0.353923515 container start d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc (image=quay.io/ceph/ceph:v20, name=modest_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:00 compute-0 podman[90288]: 2026-01-31 05:55:00.040041533 +0000 UTC m=+0.356643421 container attach d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc (image=quay.io/ceph/ceph:v20, name=modest_matsumoto, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 31 05:55:00 compute-0 podman[90315]: 2026-01-31 05:55:00.059230536 +0000 UTC m=+0.061477509 container create 742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:55:00 compute-0 systemd[1]: Started libpod-conmon-742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f.scope.
Jan 31 05:55:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7f26da8125959effa07f3cb1db23e7e2312afc2ba9aaff66e4174d5c8b0861c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7f26da8125959effa07f3cb1db23e7e2312afc2ba9aaff66e4174d5c8b0861c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7f26da8125959effa07f3cb1db23e7e2312afc2ba9aaff66e4174d5c8b0861c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7f26da8125959effa07f3cb1db23e7e2312afc2ba9aaff66e4174d5c8b0861c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7f26da8125959effa07f3cb1db23e7e2312afc2ba9aaff66e4174d5c8b0861c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:00 compute-0 podman[90315]: 2026-01-31 05:55:00.130087075 +0000 UTC m=+0.132334068 container init 742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:00 compute-0 podman[90315]: 2026-01-31 05:55:00.041503773 +0000 UTC m=+0.043750766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:00 compute-0 podman[90315]: 2026-01-31 05:55:00.135435213 +0000 UTC m=+0.137682186 container start 742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:00 compute-0 podman[90315]: 2026-01-31 05:55:00.312863823 +0000 UTC m=+0.315110876 container attach 742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:55:00 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v45: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 20.968 iops: 5367.837 elapsed_sec: 0.559
Jan 31 05:55:00 compute-0 ceph-osd[88127]: log_channel(cluster) log [WRN] : OSD bench result of 5367.837324 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 0 waiting for initial osdmap
Jan 31 05:55:00 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2[88123]: 2026-01-31T05:55:00.443+0000 7f66fc302640 -1 osd.2 0 waiting for initial osdmap
Jan 31 05:55:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 05:55:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3808204238' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 05:55:00 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-osd-2[88123]: 2026-01-31T05:55:00.460+0000 7f66f68f5640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 16 set_numa_affinity not setting numa affinity
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 31 05:55:00 compute-0 musing_lovelace[90333]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:55:00 compute-0 musing_lovelace[90333]: --> All data devices are unavailable
Jan 31 05:55:00 compute-0 systemd[1]: libpod-742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f.scope: Deactivated successfully.
Jan 31 05:55:00 compute-0 podman[90315]: 2026-01-31 05:55:00.510213927 +0000 UTC m=+0.512460910 container died 742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lovelace, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7f26da8125959effa07f3cb1db23e7e2312afc2ba9aaff66e4174d5c8b0861c-merged.mount: Deactivated successfully.
Jan 31 05:55:00 compute-0 podman[90315]: 2026-01-31 05:55:00.554622761 +0000 UTC m=+0.556869774 container remove 742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:55:00 compute-0 systemd[1]: libpod-conmon-742fe771a3c050d733a2740fbb4f1120a7d27fad173dc42bb61f274418b1e14f.scope: Deactivated successfully.
Jan 31 05:55:00 compute-0 sudo[90178]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:00 compute-0 ceph-mgr[75550]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2957327295; not ready for session (expect reconnect)
Jan 31 05:55:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:55:00 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:55:00 compute-0 ceph-mgr[75550]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 05:55:00 compute-0 sudo[90388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:00 compute-0 sudo[90388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:00 compute-0 sudo[90388]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:00 compute-0 sudo[90413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:55:00 compute-0 sudo[90413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 31 05:55:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3808204238' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Jan 31 05:55:00 compute-0 modest_matsumoto[90310]: pool 'volumes' created
Jan 31 05:55:00 compute-0 systemd[1]: libpod-d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc.scope: Deactivated successfully.
Jan 31 05:55:00 compute-0 podman[90288]: 2026-01-31 05:55:00.753659211 +0000 UTC m=+1.070261059 container died d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc (image=quay.io/ceph/ceph:v20, name=modest_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:55:00 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295] boot
Jan 31 05:55:00 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Jan 31 05:55:00 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 17 state: booting -> active
Jan 31 05:55:00 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 05:55:00 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:55:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:55:00 compute-0 ceph-mon[75251]: pgmap v45: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 05:55:00 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3808204238' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:55:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ebfaf71d70f739c15a4cea628fb5962f68e373218b093d50e4ee822ef65b254-merged.mount: Deactivated successfully.
Jan 31 05:55:01 compute-0 podman[90288]: 2026-01-31 05:55:01.056826065 +0000 UTC m=+1.373427933 container remove d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc (image=quay.io/ceph/ceph:v20, name=modest_matsumoto, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:55:01 compute-0 sudo[90285]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:01 compute-0 podman[90463]: 2026-01-31 05:55:01.087445006 +0000 UTC m=+0.146003658 container create eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:55:01 compute-0 systemd[1]: libpod-conmon-d441dbe9bd6c3af807e7fbaf5a76ed78a66ccba09b6dd7a7055d0702adff1ecc.scope: Deactivated successfully.
Jan 31 05:55:01 compute-0 systemd[1]: Started libpod-conmon-eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193.scope.
Jan 31 05:55:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:01 compute-0 podman[90463]: 2026-01-31 05:55:01.069349493 +0000 UTC m=+0.127908145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:01 compute-0 podman[90463]: 2026-01-31 05:55:01.169244569 +0000 UTC m=+0.227803221 container init eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:01 compute-0 podman[90463]: 2026-01-31 05:55:01.174869545 +0000 UTC m=+0.233428167 container start eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_buck, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:55:01 compute-0 sudo[90505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhktuzjgxjybduraeydnxfehdglvdhst ; /usr/bin/python3'
Jan 31 05:55:01 compute-0 podman[90463]: 2026-01-31 05:55:01.179242036 +0000 UTC m=+0.237800698 container attach eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_buck, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:01 compute-0 determined_buck[90479]: 167 167
Jan 31 05:55:01 compute-0 systemd[1]: libpod-eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193.scope: Deactivated successfully.
Jan 31 05:55:01 compute-0 sudo[90505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:01 compute-0 podman[90463]: 2026-01-31 05:55:01.181820968 +0000 UTC m=+0.240379630 container died eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_buck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e37f08f887234fd9f6d19b0e61827f58beac1bb259a2caa9f7ef9393741bb4fa-merged.mount: Deactivated successfully.
Jan 31 05:55:01 compute-0 podman[90463]: 2026-01-31 05:55:01.222144688 +0000 UTC m=+0.280703290 container remove eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:55:01 compute-0 systemd[1]: libpod-conmon-eb142206d5b03793d82031dfb9d7bdf61269e501042460b8565407ac0d791193.scope: Deactivated successfully.
Jan 31 05:55:01 compute-0 python3[90510]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:01 compute-0 podman[90529]: 2026-01-31 05:55:01.322939809 +0000 UTC m=+0.037294687 container create 2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_driscoll, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:55:01 compute-0 podman[90544]: 2026-01-31 05:55:01.359958288 +0000 UTC m=+0.033350668 container create 5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e (image=quay.io/ceph/ceph:v20, name=kind_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 05:55:01 compute-0 systemd[1]: Started libpod-conmon-2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988.scope.
Jan 31 05:55:01 compute-0 systemd[1]: Started libpod-conmon-5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e.scope.
Jan 31 05:55:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:01 compute-0 podman[90529]: 2026-01-31 05:55:01.306007329 +0000 UTC m=+0.020362257 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689a798294f7537ee4d2fb3d06bd44b8ae4c0f8dd3b1ebf3a829f4c4703a060/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689a798294f7537ee4d2fb3d06bd44b8ae4c0f8dd3b1ebf3a829f4c4703a060/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689a798294f7537ee4d2fb3d06bd44b8ae4c0f8dd3b1ebf3a829f4c4703a060/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a689a798294f7537ee4d2fb3d06bd44b8ae4c0f8dd3b1ebf3a829f4c4703a060/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3523b70c663e605bf34e7ac98d2a286d3de72ae4a366b133b7cb5f3ce282988e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3523b70c663e605bf34e7ac98d2a286d3de72ae4a366b133b7cb5f3ce282988e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:01 compute-0 podman[90529]: 2026-01-31 05:55:01.426854126 +0000 UTC m=+0.141209034 container init 2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:01 compute-0 podman[90544]: 2026-01-31 05:55:01.432885264 +0000 UTC m=+0.106277654 container init 5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e (image=quay.io/ceph/ceph:v20, name=kind_chebyshev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:55:01 compute-0 podman[90529]: 2026-01-31 05:55:01.433078099 +0000 UTC m=+0.147432977 container start 2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_driscoll, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:01 compute-0 podman[90544]: 2026-01-31 05:55:01.436431653 +0000 UTC m=+0.109824033 container start 5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e (image=quay.io/ceph/ceph:v20, name=kind_chebyshev, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:55:01 compute-0 podman[90544]: 2026-01-31 05:55:01.346672829 +0000 UTC m=+0.020065229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:01 compute-0 podman[90529]: 2026-01-31 05:55:01.446000538 +0000 UTC m=+0.160355426 container attach 2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_driscoll, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:01 compute-0 podman[90544]: 2026-01-31 05:55:01.484524829 +0000 UTC m=+0.157917229 container attach 5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e (image=quay.io/ceph/ceph:v20, name=kind_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 05:55:01 compute-0 musing_driscoll[90559]: {
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:     "0": [
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:         {
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "devices": [
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "/dev/loop3"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             ],
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_name": "ceph_lv0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_size": "21470642176",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "name": "ceph_lv0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "tags": {
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.crush_device_class": "",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.encrypted": "0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osd_id": "0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.type": "block",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.vdo": "0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.with_tpm": "0"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             },
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "type": "block",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "vg_name": "ceph_vg0"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:         }
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:     ],
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:     "1": [
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:         {
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "devices": [
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "/dev/loop4"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             ],
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_name": "ceph_lv1",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_size": "21470642176",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "name": "ceph_lv1",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "tags": {
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.crush_device_class": "",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.encrypted": "0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osd_id": "1",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.type": "block",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.vdo": "0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.with_tpm": "0"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             },
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "type": "block",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "vg_name": "ceph_vg1"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:         }
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:     ],
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:     "2": [
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:         {
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "devices": [
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "/dev/loop5"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             ],
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_name": "ceph_lv2",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_size": "21470642176",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "name": "ceph_lv2",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "tags": {
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.crush_device_class": "",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.encrypted": "0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osd_id": "2",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.type": "block",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.vdo": "0",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:                 "ceph.with_tpm": "0"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             },
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "type": "block",
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:             "vg_name": "ceph_vg2"
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:         }
Jan 31 05:55:01 compute-0 musing_driscoll[90559]:     ]
Jan 31 05:55:01 compute-0 musing_driscoll[90559]: }
Jan 31 05:55:01 compute-0 systemd[1]: libpod-2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988.scope: Deactivated successfully.
Jan 31 05:55:01 compute-0 podman[90593]: 2026-01-31 05:55:01.727408738 +0000 UTC m=+0.020632615 container died 2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_driscoll, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 31 05:55:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 31 05:55:01 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 31 05:55:01 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:01 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:01 compute-0 podman[90593]: 2026-01-31 05:55:01.772235223 +0000 UTC m=+0.065459070 container remove 2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_driscoll, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:01 compute-0 systemd[1]: libpod-conmon-2fee7313a0976807b22526b62caf1229b23d5506386bbfc6e31f087a24703988.scope: Deactivated successfully.
Jan 31 05:55:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 05:55:01 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2067060200' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:01 compute-0 sudo[90413]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:01 compute-0 ceph-mon[75251]: OSD bench result of 5367.837324 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 05:55:01 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3808204238' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:01 compute-0 ceph-mon[75251]: osd.2 [v2:192.168.122.100:6810/2957327295,v1:192.168.122.100:6811/2957327295] boot
Jan 31 05:55:01 compute-0 ceph-mon[75251]: osdmap e17: 3 total, 3 up, 3 in
Jan 31 05:55:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 05:55:01 compute-0 ceph-mon[75251]: osdmap e18: 3 total, 3 up, 3 in
Jan 31 05:55:01 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2067060200' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:01 compute-0 sudo[90611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:01 compute-0 sudo[90611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:01 compute-0 sudo[90611]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:01 compute-0 sudo[90636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:55:01 compute-0 sudo[90636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a689a798294f7537ee4d2fb3d06bd44b8ae4c0f8dd3b1ebf3a829f4c4703a060-merged.mount: Deactivated successfully.
Jan 31 05:55:02 compute-0 podman[90673]: 2026-01-31 05:55:02.141200735 +0000 UTC m=+0.038865981 container create 815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chebyshev, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:55:02 compute-0 systemd[1]: Started libpod-conmon-815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd.scope.
Jan 31 05:55:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:02 compute-0 podman[90673]: 2026-01-31 05:55:02.213190486 +0000 UTC m=+0.110855712 container init 815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:02 compute-0 podman[90673]: 2026-01-31 05:55:02.220657823 +0000 UTC m=+0.118323079 container start 815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:02 compute-0 podman[90673]: 2026-01-31 05:55:02.124348957 +0000 UTC m=+0.022014183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:02 compute-0 xenodochial_chebyshev[90690]: 167 167
Jan 31 05:55:02 compute-0 systemd[1]: libpod-815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd.scope: Deactivated successfully.
Jan 31 05:55:02 compute-0 podman[90673]: 2026-01-31 05:55:02.225402125 +0000 UTC m=+0.123067371 container attach 815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:55:02 compute-0 podman[90673]: 2026-01-31 05:55:02.226553097 +0000 UTC m=+0.124218333 container died 815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chebyshev, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fc91ab8906eb0fec893f39494932c646224490ffad28456e754231b52bde0dc-merged.mount: Deactivated successfully.
Jan 31 05:55:02 compute-0 podman[90673]: 2026-01-31 05:55:02.260207102 +0000 UTC m=+0.157872328 container remove 815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:02 compute-0 systemd[1]: libpod-conmon-815551af1d988b447122d06b7c9c521e115e65c0165f70a33ef775a8cbc588cd.scope: Deactivated successfully.
Jan 31 05:55:02 compute-0 podman[90715]: 2026-01-31 05:55:02.366262279 +0000 UTC m=+0.036084774 container create af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:55:02 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v48: 3 pgs: 1 creating+peering, 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:02 compute-0 systemd[1]: Started libpod-conmon-af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00.scope.
Jan 31 05:55:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b0444c5a147e5f22702d42d9cd2c9db48a81890d43da06e9945eb81ae23e78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b0444c5a147e5f22702d42d9cd2c9db48a81890d43da06e9945eb81ae23e78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b0444c5a147e5f22702d42d9cd2c9db48a81890d43da06e9945eb81ae23e78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b0444c5a147e5f22702d42d9cd2c9db48a81890d43da06e9945eb81ae23e78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:02 compute-0 podman[90715]: 2026-01-31 05:55:02.348066223 +0000 UTC m=+0.017888738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:02 compute-0 podman[90715]: 2026-01-31 05:55:02.460624801 +0000 UTC m=+0.130447296 container init af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:55:02 compute-0 podman[90715]: 2026-01-31 05:55:02.465605639 +0000 UTC m=+0.135428144 container start af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:55:02 compute-0 podman[90715]: 2026-01-31 05:55:02.481921722 +0000 UTC m=+0.151744227 container attach af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 31 05:55:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2067060200' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 31 05:55:02 compute-0 kind_chebyshev[90564]: pool 'backups' created
Jan 31 05:55:02 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 31 05:55:02 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:02 compute-0 systemd[1]: libpod-5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e.scope: Deactivated successfully.
Jan 31 05:55:02 compute-0 podman[90544]: 2026-01-31 05:55:02.763988469 +0000 UTC m=+1.437380869 container died 5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e (image=quay.io/ceph/ceph:v20, name=kind_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3523b70c663e605bf34e7ac98d2a286d3de72ae4a366b133b7cb5f3ce282988e-merged.mount: Deactivated successfully.
Jan 31 05:55:02 compute-0 podman[90544]: 2026-01-31 05:55:02.800432491 +0000 UTC m=+1.473824881 container remove 5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e (image=quay.io/ceph/ceph:v20, name=kind_chebyshev, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 05:55:02 compute-0 systemd[1]: libpod-conmon-5bd7488724da298afbc44b087d6a10d2868fda50f6ea706a738079b6837f2f3e.scope: Deactivated successfully.
Jan 31 05:55:02 compute-0 sudo[90505]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:02 compute-0 ceph-mon[75251]: pgmap v48: 3 pgs: 1 creating+peering, 1 unknown, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:02 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2067060200' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:02 compute-0 ceph-mon[75251]: osdmap e19: 3 total, 3 up, 3 in
Jan 31 05:55:02 compute-0 sudo[90828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhlaegrfeobhuewkgcocsdwreqyjzkxb ; /usr/bin/python3'
Jan 31 05:55:02 compute-0 sudo[90828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:03 compute-0 lvm[90844]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:55:03 compute-0 lvm[90844]: VG ceph_vg0 finished
Jan 31 05:55:03 compute-0 lvm[90847]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:55:03 compute-0 lvm[90847]: VG ceph_vg1 finished
Jan 31 05:55:03 compute-0 python3[90834]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:03 compute-0 lvm[90855]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:55:03 compute-0 lvm[90855]: VG ceph_vg2 finished
Jan 31 05:55:03 compute-0 podman[90848]: 2026-01-31 05:55:03.081745708 +0000 UTC m=+0.035734804 container create 5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915 (image=quay.io/ceph/ceph:v20, name=cool_burnell, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:03 compute-0 systemd[1]: Started libpod-conmon-5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915.scope.
Jan 31 05:55:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6a8e375b22f04b6b2d1bcc3918ced628ba84f48f2ddcd20234c0e84a6a722f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee6a8e375b22f04b6b2d1bcc3918ced628ba84f48f2ddcd20234c0e84a6a722f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:03 compute-0 podman[90848]: 2026-01-31 05:55:03.068632754 +0000 UTC m=+0.022621860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:03 compute-0 hardcore_chatterjee[90731]: {}
Jan 31 05:55:03 compute-0 podman[90848]: 2026-01-31 05:55:03.180351608 +0000 UTC m=+0.134340804 container init 5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915 (image=quay.io/ceph/ceph:v20, name=cool_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:55:03 compute-0 podman[90848]: 2026-01-31 05:55:03.188363061 +0000 UTC m=+0.142352197 container start 5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915 (image=quay.io/ceph/ceph:v20, name=cool_burnell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 31 05:55:03 compute-0 podman[90848]: 2026-01-31 05:55:03.192384742 +0000 UTC m=+0.146373868 container attach 5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915 (image=quay.io/ceph/ceph:v20, name=cool_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:03 compute-0 podman[90715]: 2026-01-31 05:55:03.207991586 +0000 UTC m=+0.877814091 container died af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:03 compute-0 systemd[1]: libpod-af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00.scope: Deactivated successfully.
Jan 31 05:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-74b0444c5a147e5f22702d42d9cd2c9db48a81890d43da06e9945eb81ae23e78-merged.mount: Deactivated successfully.
Jan 31 05:55:03 compute-0 podman[90715]: 2026-01-31 05:55:03.252860273 +0000 UTC m=+0.922682808 container remove af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_chatterjee, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:03 compute-0 systemd[1]: libpod-conmon-af20e5b447507a2b09831c20f25120eade8acb6573d89e401880fc7b0209fb00.scope: Deactivated successfully.
Jan 31 05:55:03 compute-0 sudo[90636]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:55:03 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:55:03 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:03 compute-0 sudo[90885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:55:03 compute-0 sudo[90885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:03 compute-0 sudo[90885]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 05:55:03 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1657348418' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 31 05:55:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v50: 4 pgs: 2 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1657348418' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 31 05:55:05 compute-0 cool_burnell[90868]: pool 'images' created
Jan 31 05:55:05 compute-0 systemd[1]: libpod-5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915.scope: Deactivated successfully.
Jan 31 05:55:05 compute-0 podman[90848]: 2026-01-31 05:55:05.284196045 +0000 UTC m=+2.238185151 container died 5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915 (image=quay.io/ceph/ceph:v20, name=cool_burnell, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 05:55:05 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 31 05:55:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:05 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1657348418' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee6a8e375b22f04b6b2d1bcc3918ced628ba84f48f2ddcd20234c0e84a6a722f-merged.mount: Deactivated successfully.
Jan 31 05:55:05 compute-0 podman[90848]: 2026-01-31 05:55:05.585805785 +0000 UTC m=+2.539794891 container remove 5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915 (image=quay.io/ceph/ceph:v20, name=cool_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 31 05:55:05 compute-0 systemd[1]: libpod-conmon-5aa0f37b02813213f954a45c672433c8a380793f172748bc719f7b6e9899c915.scope: Deactivated successfully.
Jan 31 05:55:05 compute-0 sudo[90828]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:05 compute-0 sudo[90969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fazkyjoxvbslelekopunsdgzfeiuspss ; /usr/bin/python3'
Jan 31 05:55:05 compute-0 sudo[90969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:05 compute-0 python3[90971]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:06 compute-0 podman[90972]: 2026-01-31 05:55:06.017916272 +0000 UTC m=+0.088030787 container create fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e (image=quay.io/ceph/ceph:v20, name=suspicious_borg, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:06 compute-0 podman[90972]: 2026-01-31 05:55:05.964624961 +0000 UTC m=+0.034739496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:06 compute-0 systemd[1]: Started libpod-conmon-fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e.scope.
Jan 31 05:55:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe929738ae18c52c17f5e612b706c6b042bd3e6163b3a960a3a2da04a49badd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe929738ae18c52c17f5e612b706c6b042bd3e6163b3a960a3a2da04a49badd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:06 compute-0 podman[90972]: 2026-01-31 05:55:06.301183952 +0000 UTC m=+0.371298447 container init fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e (image=quay.io/ceph/ceph:v20, name=suspicious_borg, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 05:55:06 compute-0 podman[90972]: 2026-01-31 05:55:06.309727959 +0000 UTC m=+0.379842444 container start fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e (image=quay.io/ceph/ceph:v20, name=suspicious_borg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:06 compute-0 podman[90972]: 2026-01-31 05:55:06.378011377 +0000 UTC m=+0.448125862 container attach fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e (image=quay.io/ceph/ceph:v20, name=suspicious_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 05:55:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 31 05:55:06 compute-0 ceph-mon[75251]: pgmap v50: 4 pgs: 2 active+clean, 1 creating+peering, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:06 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1657348418' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:06 compute-0 ceph-mon[75251]: osdmap e20: 3 total, 3 up, 3 in
Jan 31 05:55:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 31 05:55:06 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 31 05:55:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 05:55:06 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1639833883' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v53: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 31 05:55:07 compute-0 ceph-mon[75251]: osdmap e21: 3 total, 3 up, 3 in
Jan 31 05:55:07 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1639833883' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:07 compute-0 ceph-mon[75251]: pgmap v53: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:07 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1639833883' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 31 05:55:07 compute-0 suspicious_borg[90987]: pool 'cephfs.cephfs.meta' created
Jan 31 05:55:07 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 31 05:55:07 compute-0 systemd[1]: libpod-fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e.scope: Deactivated successfully.
Jan 31 05:55:07 compute-0 podman[90972]: 2026-01-31 05:55:07.674233113 +0000 UTC m=+1.744347668 container died fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e (image=quay.io/ceph/ceph:v20, name=suspicious_borg, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe929738ae18c52c17f5e612b706c6b042bd3e6163b3a960a3a2da04a49badd-merged.mount: Deactivated successfully.
Jan 31 05:55:07 compute-0 podman[90972]: 2026-01-31 05:55:07.722327539 +0000 UTC m=+1.792442054 container remove fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e (image=quay.io/ceph/ceph:v20, name=suspicious_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 05:55:07 compute-0 systemd[1]: libpod-conmon-fd9b4cc0494515a4cd96952d298a17c0b37e275eb153b76380028cb8ab1adf6e.scope: Deactivated successfully.
Jan 31 05:55:07 compute-0 sudo[90969]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:07 compute-0 sudo[91049]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yasjfkaunzfrfyevtnfhysatmyphalwc ; /usr/bin/python3'
Jan 31 05:55:07 compute-0 sudo[91049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:08 compute-0 python3[91051]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:08 compute-0 podman[91052]: 2026-01-31 05:55:08.083879905 +0000 UTC m=+0.059566166 container create 87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df (image=quay.io/ceph/ceph:v20, name=strange_leakey, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:08 compute-0 podman[91052]: 2026-01-31 05:55:08.05742015 +0000 UTC m=+0.033106461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:08 compute-0 systemd[1]: Started libpod-conmon-87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df.scope.
Jan 31 05:55:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee010f6b72424b2c02c58e37af4c84163ccd9a9131a7a3cf2f1af8ae57c69d3a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee010f6b72424b2c02c58e37af4c84163ccd9a9131a7a3cf2f1af8ae57c69d3a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:08 compute-0 podman[91052]: 2026-01-31 05:55:08.210495383 +0000 UTC m=+0.186181714 container init 87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df (image=quay.io/ceph/ceph:v20, name=strange_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:08 compute-0 podman[91052]: 2026-01-31 05:55:08.218573958 +0000 UTC m=+0.194260239 container start 87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df (image=quay.io/ceph/ceph:v20, name=strange_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:55:08 compute-0 podman[91052]: 2026-01-31 05:55:08.223228277 +0000 UTC m=+0.198914518 container attach 87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df (image=quay.io/ceph/ceph:v20, name=strange_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 31 05:55:08 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1639833883' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:08 compute-0 ceph-mon[75251]: osdmap e22: 3 total, 3 up, 3 in
Jan 31 05:55:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 05:55:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3769067433' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 31 05:55:08 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 31 05:55:08 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v56: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:09 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3769067433' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 05:55:09 compute-0 ceph-mon[75251]: osdmap e23: 3 total, 3 up, 3 in
Jan 31 05:55:09 compute-0 ceph-mon[75251]: pgmap v56: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 31 05:55:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3769067433' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 31 05:55:09 compute-0 strange_leakey[91068]: pool 'cephfs.cephfs.data' created
Jan 31 05:55:09 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 31 05:55:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:09 compute-0 systemd[1]: libpod-87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df.scope: Deactivated successfully.
Jan 31 05:55:09 compute-0 podman[91052]: 2026-01-31 05:55:09.704703621 +0000 UTC m=+1.680389882 container died 87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df (image=quay.io/ceph/ceph:v20, name=strange_leakey, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee010f6b72424b2c02c58e37af4c84163ccd9a9131a7a3cf2f1af8ae57c69d3a-merged.mount: Deactivated successfully.
Jan 31 05:55:09 compute-0 podman[91052]: 2026-01-31 05:55:09.785350481 +0000 UTC m=+1.761036742 container remove 87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df (image=quay.io/ceph/ceph:v20, name=strange_leakey, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 05:55:09 compute-0 systemd[1]: libpod-conmon-87c06ed2880ce07a70027dfbe77294c42dd719e270a5cf564d6082410f4a00df.scope: Deactivated successfully.
Jan 31 05:55:09 compute-0 sudo[91049]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:09 compute-0 sudo[91131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axorkbmehtbwumdwegkywizuonhcyniu ; /usr/bin/python3'
Jan 31 05:55:09 compute-0 sudo[91131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:10 compute-0 python3[91133]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:10 compute-0 podman[91134]: 2026-01-31 05:55:10.240684763 +0000 UTC m=+0.065047218 container create 17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca (image=quay.io/ceph/ceph:v20, name=beautiful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:55:10 compute-0 systemd[1]: Started libpod-conmon-17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca.scope.
Jan 31 05:55:10 compute-0 podman[91134]: 2026-01-31 05:55:10.211385329 +0000 UTC m=+0.035747804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1296bca2659576d73ce43c4198963f9ce30c7cd778e3e2ed6dbc98745acbb321/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1296bca2659576d73ce43c4198963f9ce30c7cd778e3e2ed6dbc98745acbb321/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:10 compute-0 podman[91134]: 2026-01-31 05:55:10.341094583 +0000 UTC m=+0.165457078 container init 17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca (image=quay.io/ceph/ceph:v20, name=beautiful_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:10 compute-0 podman[91134]: 2026-01-31 05:55:10.349290721 +0000 UTC m=+0.173653176 container start 17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca (image=quay.io/ceph/ceph:v20, name=beautiful_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:10 compute-0 podman[91134]: 2026-01-31 05:55:10.353160288 +0000 UTC m=+0.177522793 container attach 17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca (image=quay.io/ceph/ceph:v20, name=beautiful_elbakyan, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:55:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 31 05:55:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 31 05:55:10 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 31 05:55:10 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3769067433' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 05:55:10 compute-0 ceph-mon[75251]: osdmap e24: 3 total, 3 up, 3 in
Jan 31 05:55:10 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 31 05:55:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1613584137' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 05:55:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 2 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 31 05:55:11 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1613584137' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 05:55:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 31 05:55:11 compute-0 beautiful_elbakyan[91149]: enabled application 'rbd' on pool 'vms'
Jan 31 05:55:11 compute-0 ceph-mon[75251]: osdmap e25: 3 total, 3 up, 3 in
Jan 31 05:55:11 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1613584137' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 05:55:11 compute-0 ceph-mon[75251]: pgmap v59: 7 pgs: 2 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:11 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 31 05:55:11 compute-0 systemd[1]: libpod-17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca.scope: Deactivated successfully.
Jan 31 05:55:11 compute-0 podman[91134]: 2026-01-31 05:55:11.742835492 +0000 UTC m=+1.567197927 container died 17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca (image=quay.io/ceph/ceph:v20, name=beautiful_elbakyan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1296bca2659576d73ce43c4198963f9ce30c7cd778e3e2ed6dbc98745acbb321-merged.mount: Deactivated successfully.
Jan 31 05:55:11 compute-0 podman[91134]: 2026-01-31 05:55:11.794309922 +0000 UTC m=+1.618672387 container remove 17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca (image=quay.io/ceph/ceph:v20, name=beautiful_elbakyan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:11 compute-0 systemd[1]: libpod-conmon-17eaf285d4af2f4027376e21f6d02bdbbcd46cad4dc8506ff024b24a6f5b3bca.scope: Deactivated successfully.
Jan 31 05:55:11 compute-0 sudo[91131]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:11 compute-0 sudo[91207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuzyvavijkjazhcmdciuirbrwsfbeuqj ; /usr/bin/python3'
Jan 31 05:55:11 compute-0 sudo[91207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:12 compute-0 python3[91209]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:12 compute-0 podman[91210]: 2026-01-31 05:55:12.147635609 +0000 UTC m=+0.043297174 container create 61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3 (image=quay.io/ceph/ceph:v20, name=vibrant_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:12 compute-0 systemd[1]: Started libpod-conmon-61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3.scope.
Jan 31 05:55:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb52b16bce911766f5e10b5d8ce2f00e61d83f52298d9e0b56dbb57256c14e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98bb52b16bce911766f5e10b5d8ce2f00e61d83f52298d9e0b56dbb57256c14e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:12 compute-0 podman[91210]: 2026-01-31 05:55:12.130813742 +0000 UTC m=+0.026475287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:12 compute-0 podman[91210]: 2026-01-31 05:55:12.235987394 +0000 UTC m=+0.131648969 container init 61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3 (image=quay.io/ceph/ceph:v20, name=vibrant_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:55:12 compute-0 podman[91210]: 2026-01-31 05:55:12.2408786 +0000 UTC m=+0.136540145 container start 61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3 (image=quay.io/ceph/ceph:v20, name=vibrant_turing, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:12 compute-0 podman[91210]: 2026-01-31 05:55:12.24698267 +0000 UTC m=+0.142644235 container attach 61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3 (image=quay.io/ceph/ceph:v20, name=vibrant_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 31 05:55:12 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2594179635' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 05:55:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 31 05:55:12 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2594179635' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 05:55:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 31 05:55:12 compute-0 vibrant_turing[91225]: enabled application 'rbd' on pool 'volumes'
Jan 31 05:55:12 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 31 05:55:12 compute-0 systemd[1]: libpod-61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3.scope: Deactivated successfully.
Jan 31 05:55:12 compute-0 podman[91210]: 2026-01-31 05:55:12.891521489 +0000 UTC m=+0.787183054 container died 61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3 (image=quay.io/ceph/ceph:v20, name=vibrant_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:55:12 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1613584137' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 05:55:12 compute-0 ceph-mon[75251]: osdmap e26: 3 total, 3 up, 3 in
Jan 31 05:55:12 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2594179635' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 05:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-98bb52b16bce911766f5e10b5d8ce2f00e61d83f52298d9e0b56dbb57256c14e-merged.mount: Deactivated successfully.
Jan 31 05:55:13 compute-0 podman[91210]: 2026-01-31 05:55:13.031608431 +0000 UTC m=+0.927269966 container remove 61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3 (image=quay.io/ceph/ceph:v20, name=vibrant_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:13 compute-0 systemd[1]: libpod-conmon-61f98c867611f58ed710dbcae3003cb82e7bf3ca72607d7c56c02853797c98e3.scope: Deactivated successfully.
Jan 31 05:55:13 compute-0 sudo[91207]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:13 compute-0 sudo[91286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaqojtbofdwlkbavsifykqprzealcisr ; /usr/bin/python3'
Jan 31 05:55:13 compute-0 sudo[91286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:13 compute-0 python3[91288]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:13 compute-0 podman[91289]: 2026-01-31 05:55:13.387305374 +0000 UTC m=+0.074476061 container create 92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac (image=quay.io/ceph/ceph:v20, name=funny_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:55:13 compute-0 podman[91289]: 2026-01-31 05:55:13.331839483 +0000 UTC m=+0.019010160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:13 compute-0 systemd[1]: Started libpod-conmon-92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac.scope.
Jan 31 05:55:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab16671c44c4a9a1c0c6b9b1ab12a1429708aaf13efa7b92c2755f4a29b109c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab16671c44c4a9a1c0c6b9b1ab12a1429708aaf13efa7b92c2755f4a29b109c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:13 compute-0 podman[91289]: 2026-01-31 05:55:13.513207832 +0000 UTC m=+0.200378539 container init 92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac (image=quay.io/ceph/ceph:v20, name=funny_jang, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:13 compute-0 podman[91289]: 2026-01-31 05:55:13.520391462 +0000 UTC m=+0.207562119 container start 92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac (image=quay.io/ceph/ceph:v20, name=funny_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:13 compute-0 podman[91289]: 2026-01-31 05:55:13.548148483 +0000 UTC m=+0.235319140 container attach 92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac (image=quay.io/ceph/ceph:v20, name=funny_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:55:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 31 05:55:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2184760944' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 05:55:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 31 05:55:13 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2594179635' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 05:55:13 compute-0 ceph-mon[75251]: osdmap e27: 3 total, 3 up, 3 in
Jan 31 05:55:13 compute-0 ceph-mon[75251]: pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:13 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2184760944' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 05:55:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2184760944' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 05:55:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 31 05:55:13 compute-0 funny_jang[91305]: enabled application 'rbd' on pool 'backups'
Jan 31 05:55:13 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 31 05:55:13 compute-0 systemd[1]: libpod-92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac.scope: Deactivated successfully.
Jan 31 05:55:13 compute-0 podman[91289]: 2026-01-31 05:55:13.976419653 +0000 UTC m=+0.663590350 container died 92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac (image=quay.io/ceph/ceph:v20, name=funny_jang, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab16671c44c4a9a1c0c6b9b1ab12a1429708aaf13efa7b92c2755f4a29b109c9-merged.mount: Deactivated successfully.
Jan 31 05:55:14 compute-0 podman[91289]: 2026-01-31 05:55:14.013841982 +0000 UTC m=+0.701012680 container remove 92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac (image=quay.io/ceph/ceph:v20, name=funny_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:55:14 compute-0 systemd[1]: libpod-conmon-92f61698e781875428e121cc792d477ca67d65fcc6ae45a7127a1ada5fe59aac.scope: Deactivated successfully.
Jan 31 05:55:14 compute-0 sudo[91286]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:14 compute-0 sudo[91366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oemwsynlvanvxvujuascqhitebtfbraq ; /usr/bin/python3'
Jan 31 05:55:14 compute-0 sudo[91366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:14 compute-0 python3[91368]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:14 compute-0 podman[91369]: 2026-01-31 05:55:14.338423431 +0000 UTC m=+0.045999829 container create 2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519 (image=quay.io/ceph/ceph:v20, name=festive_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:14 compute-0 systemd[1]: Started libpod-conmon-2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519.scope.
Jan 31 05:55:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb42448e4335aafa587879e363ff56d228d83eb9486ecd68bcf9d27351a18d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cbb42448e4335aafa587879e363ff56d228d83eb9486ecd68bcf9d27351a18d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:14 compute-0 podman[91369]: 2026-01-31 05:55:14.406796471 +0000 UTC m=+0.114372909 container init 2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519 (image=quay.io/ceph/ceph:v20, name=festive_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:14 compute-0 podman[91369]: 2026-01-31 05:55:14.316712398 +0000 UTC m=+0.024288886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:14 compute-0 podman[91369]: 2026-01-31 05:55:14.411675867 +0000 UTC m=+0.119252305 container start 2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519 (image=quay.io/ceph/ceph:v20, name=festive_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:14 compute-0 podman[91369]: 2026-01-31 05:55:14.415503673 +0000 UTC m=+0.123080111 container attach 2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519 (image=quay.io/ceph/ceph:v20, name=festive_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:55:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 31 05:55:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3603442392' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 05:55:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 31 05:55:14 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2184760944' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 05:55:14 compute-0 ceph-mon[75251]: osdmap e28: 3 total, 3 up, 3 in
Jan 31 05:55:14 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3603442392' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 05:55:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3603442392' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 05:55:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 31 05:55:14 compute-0 festive_beaver[91384]: enabled application 'rbd' on pool 'images'
Jan 31 05:55:14 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 31 05:55:15 compute-0 systemd[1]: libpod-2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519.scope: Deactivated successfully.
Jan 31 05:55:15 compute-0 podman[91409]: 2026-01-31 05:55:15.053450439 +0000 UTC m=+0.035199339 container died 2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519 (image=quay.io/ceph/ceph:v20, name=festive_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cbb42448e4335aafa587879e363ff56d228d83eb9486ecd68bcf9d27351a18d-merged.mount: Deactivated successfully.
Jan 31 05:55:15 compute-0 podman[91409]: 2026-01-31 05:55:15.090629312 +0000 UTC m=+0.072378142 container remove 2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519 (image=quay.io/ceph/ceph:v20, name=festive_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:55:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:15 compute-0 systemd[1]: libpod-conmon-2469f6c6cbd55bc8ef2d99cbbb5e2e4bbfc9360552b5ba94fbbb33be26c49519.scope: Deactivated successfully.
Jan 31 05:55:15 compute-0 sudo[91366]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:15 compute-0 sudo[91448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggdkbdrgmvyfmxzrvnlqgoqqxqfbzvyg ; /usr/bin/python3'
Jan 31 05:55:15 compute-0 sudo[91448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:55:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:55:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:55:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:55:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:55:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:55:15 compute-0 python3[91450]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:15 compute-0 podman[91451]: 2026-01-31 05:55:15.406297343 +0000 UTC m=+0.040768664 container create 59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff (image=quay.io/ceph/ceph:v20, name=gifted_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:15 compute-0 systemd[1]: Started libpod-conmon-59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff.scope.
Jan 31 05:55:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3bb60f9e5bfdcf87498c1a25806f89288beb6b8704b382d6c5b1444ca5389d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3bb60f9e5bfdcf87498c1a25806f89288beb6b8704b382d6c5b1444ca5389d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:15 compute-0 podman[91451]: 2026-01-31 05:55:15.382576174 +0000 UTC m=+0.017047505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:15 compute-0 podman[91451]: 2026-01-31 05:55:15.551623561 +0000 UTC m=+0.186094912 container init 59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff (image=quay.io/ceph/ceph:v20, name=gifted_lichterman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:55:15 compute-0 podman[91451]: 2026-01-31 05:55:15.555123018 +0000 UTC m=+0.189594349 container start 59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff (image=quay.io/ceph/ceph:v20, name=gifted_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:55:15 compute-0 podman[91451]: 2026-01-31 05:55:15.741726063 +0000 UTC m=+0.376197414 container attach 59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff (image=quay.io/ceph/ceph:v20, name=gifted_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:55:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 31 05:55:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3192030611' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 05:55:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 31 05:55:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3603442392' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 05:55:16 compute-0 ceph-mon[75251]: osdmap e29: 3 total, 3 up, 3 in
Jan 31 05:55:16 compute-0 ceph-mon[75251]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3192030611' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 05:55:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3192030611' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 05:55:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 31 05:55:16 compute-0 gifted_lichterman[91466]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 31 05:55:16 compute-0 systemd[1]: libpod-59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff.scope: Deactivated successfully.
Jan 31 05:55:16 compute-0 conmon[91466]: conmon 59833f07758e0204db49 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff.scope/container/memory.events
Jan 31 05:55:16 compute-0 podman[91451]: 2026-01-31 05:55:16.903665278 +0000 UTC m=+1.538136609 container died 59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff (image=quay.io/ceph/ceph:v20, name=gifted_lichterman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:55:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:17 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 31 05:55:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:18 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3192030611' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 05:55:18 compute-0 ceph-mon[75251]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:18 compute-0 ceph-mon[75251]: osdmap e30: 3 total, 3 up, 3 in
Jan 31 05:55:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce3bb60f9e5bfdcf87498c1a25806f89288beb6b8704b382d6c5b1444ca5389d-merged.mount: Deactivated successfully.
Jan 31 05:55:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:19 compute-0 podman[91451]: 2026-01-31 05:55:19.313473977 +0000 UTC m=+3.947945318 container remove 59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff (image=quay.io/ceph/ceph:v20, name=gifted_lichterman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 05:55:19 compute-0 sudo[91448]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:19 compute-0 systemd[1]: libpod-conmon-59833f07758e0204db4909ee3f20515a98e417dcda2dcb63ddf2e145e5b720ff.scope: Deactivated successfully.
Jan 31 05:55:19 compute-0 sudo[91527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcgeyednfquajsbvfeurywdqkaclzvdj ; /usr/bin/python3'
Jan 31 05:55:19 compute-0 sudo[91527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:19 compute-0 python3[91529]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:19 compute-0 podman[91530]: 2026-01-31 05:55:19.638900999 +0000 UTC m=+0.049061544 container create df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a (image=quay.io/ceph/ceph:v20, name=adoring_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:55:19 compute-0 systemd[1]: Started libpod-conmon-df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a.scope.
Jan 31 05:55:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d870eb39c2649daa9562132837339be0ba274e4ea38f07db7413b503912da28e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d870eb39c2649daa9562132837339be0ba274e4ea38f07db7413b503912da28e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:19 compute-0 podman[91530]: 2026-01-31 05:55:19.620674792 +0000 UTC m=+0.030835327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:19 compute-0 podman[91530]: 2026-01-31 05:55:19.718574733 +0000 UTC m=+0.128735318 container init df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a (image=quay.io/ceph/ceph:v20, name=adoring_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:19 compute-0 podman[91530]: 2026-01-31 05:55:19.721905655 +0000 UTC m=+0.132066200 container start df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a (image=quay.io/ceph/ceph:v20, name=adoring_easley, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:55:19 compute-0 podman[91530]: 2026-01-31 05:55:19.725906026 +0000 UTC m=+0.136066571 container attach df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a (image=quay.io/ceph/ceph:v20, name=adoring_easley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:19 compute-0 ceph-mon[75251]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 31 05:55:20 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1785277188' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 05:55:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 31 05:55:20 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1785277188' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 05:55:20 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1785277188' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 05:55:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 31 05:55:20 compute-0 adoring_easley[91545]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 31 05:55:20 compute-0 systemd[1]: libpod-df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a.scope: Deactivated successfully.
Jan 31 05:55:20 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 31 05:55:20 compute-0 podman[91570]: 2026-01-31 05:55:20.945170904 +0000 UTC m=+0.032427282 container died df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a (image=quay.io/ceph/ceph:v20, name=adoring_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d870eb39c2649daa9562132837339be0ba274e4ea38f07db7413b503912da28e-merged.mount: Deactivated successfully.
Jan 31 05:55:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:22 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1785277188' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 05:55:22 compute-0 ceph-mon[75251]: osdmap e31: 3 total, 3 up, 3 in
Jan 31 05:55:22 compute-0 ceph-mon[75251]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:22 compute-0 podman[91570]: 2026-01-31 05:55:22.858603671 +0000 UTC m=+1.945860029 container remove df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a (image=quay.io/ceph/ceph:v20, name=adoring_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:55:22 compute-0 systemd[1]: libpod-conmon-df6eb41007eba14a9b9d1d0d5840cbad7c7a9a8ebbf125115c52cc4e39c3c89a.scope: Deactivated successfully.
Jan 31 05:55:22 compute-0 sudo[91527]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:23 compute-0 python3[91660]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:55:24 compute-0 python3[91731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838923.4909804-36573-274124761810643/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:55:24 compute-0 sudo[91831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdnikrriilccmqdyxvdnceqsphxulmr ; /usr/bin/python3'
Jan 31 05:55:24 compute-0 sudo[91831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:24 compute-0 ceph-mon[75251]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:25 compute-0 python3[91833]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:55:25 compute-0 sudo[91831]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:25 compute-0 sudo[91906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebhvcpjdnbazoclojyboolwlsgepgnnz ; /usr/bin/python3'
Jan 31 05:55:25 compute-0 sudo[91906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:25 compute-0 python3[91908]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838924.5469747-36587-15775738358613/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=16c42b87ec059509c474a8d9d3359225b8478205 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:55:25 compute-0 sudo[91906]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:25 compute-0 sudo[91956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbkxjaydawvxmjyiwnxbezgdfbtpyqko ; /usr/bin/python3'
Jan 31 05:55:25 compute-0 sudo[91956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:25 compute-0 python3[91958]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:26 compute-0 podman[91959]: 2026-01-31 05:55:26.027816909 +0000 UTC m=+0.036219597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:26 compute-0 ceph-mon[75251]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:26 compute-0 podman[91959]: 2026-01-31 05:55:26.367062525 +0000 UTC m=+0.375465173 container create 6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a (image=quay.io/ceph/ceph:v20, name=agitated_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:27 compute-0 systemd[1]: Started libpod-conmon-6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a.scope.
Jan 31 05:55:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eaef05b47d061c269cba35b06de33cc92c738d1c6315813847a7d80e130f648/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eaef05b47d061c269cba35b06de33cc92c738d1c6315813847a7d80e130f648/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eaef05b47d061c269cba35b06de33cc92c738d1c6315813847a7d80e130f648/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:27 compute-0 podman[91959]: 2026-01-31 05:55:27.251833759 +0000 UTC m=+1.260236457 container init 6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a (image=quay.io/ceph/ceph:v20, name=agitated_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 05:55:27 compute-0 podman[91959]: 2026-01-31 05:55:27.259900653 +0000 UTC m=+1.268303281 container start 6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a (image=quay.io/ceph/ceph:v20, name=agitated_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:27 compute-0 podman[91959]: 2026-01-31 05:55:27.352918958 +0000 UTC m=+1.361321556 container attach 6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a (image=quay.io/ceph/ceph:v20, name=agitated_wescoff, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 05:55:27 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1278630741' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 05:55:27 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1278630741' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 05:55:27 compute-0 agitated_wescoff[91974]: 
Jan 31 05:55:27 compute-0 agitated_wescoff[91974]: [global]
Jan 31 05:55:27 compute-0 agitated_wescoff[91974]:         fsid = 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:55:27 compute-0 agitated_wescoff[91974]:         mon_host = 192.168.122.100
Jan 31 05:55:27 compute-0 agitated_wescoff[91974]:         rgw_keystone_api_version = 3
Jan 31 05:55:27 compute-0 systemd[1]: libpod-6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a.scope: Deactivated successfully.
Jan 31 05:55:27 compute-0 podman[91959]: 2026-01-31 05:55:27.756709747 +0000 UTC m=+1.765112405 container died 6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a (image=quay.io/ceph/ceph:v20, name=agitated_wescoff, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:27 compute-0 sudo[92000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:27 compute-0 sudo[92000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:27 compute-0 sudo[92000]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:27 compute-0 sudo[92034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:55:27 compute-0 sudo[92034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eaef05b47d061c269cba35b06de33cc92c738d1c6315813847a7d80e130f648-merged.mount: Deactivated successfully.
Jan 31 05:55:28 compute-0 podman[91959]: 2026-01-31 05:55:28.000006607 +0000 UTC m=+2.008409225 container remove 6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a (image=quay.io/ceph/ceph:v20, name=agitated_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:28 compute-0 sudo[91956]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:28 compute-0 systemd[1]: libpod-conmon-6f7790435433fefb08a7abefe31e17f72f341ed1e8fa5606300af7f2b6dd0c5a.scope: Deactivated successfully.
Jan 31 05:55:28 compute-0 sudo[92126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlahiynifyrwnwfxqfbzgzqqygvftspo ; /usr/bin/python3'
Jan 31 05:55:28 compute-0 sudo[92126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:28 compute-0 ceph-mon[75251]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:28 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1278630741' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 05:55:28 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1278630741' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 05:55:28 compute-0 podman[92128]: 2026-01-31 05:55:28.296892276 +0000 UTC m=+0.124527971 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:28 compute-0 python3[92130]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:28 compute-0 podman[92147]: 2026-01-31 05:55:28.365571334 +0000 UTC m=+0.053894458 container create 22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a (image=quay.io/ceph/ceph:v20, name=angry_murdock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:28 compute-0 systemd[1]: Started libpod-conmon-22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a.scope.
Jan 31 05:55:28 compute-0 podman[92128]: 2026-01-31 05:55:28.405506524 +0000 UTC m=+0.233142149 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:55:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5dfebf4c4698b2aa98ab4f390526886d6f0d0426548493c3860fe950e2c7c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5dfebf4c4698b2aa98ab4f390526886d6f0d0426548493c3860fe950e2c7c6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5dfebf4c4698b2aa98ab4f390526886d6f0d0426548493c3860fe950e2c7c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:28 compute-0 podman[92147]: 2026-01-31 05:55:28.338476111 +0000 UTC m=+0.026799335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:28 compute-0 podman[92147]: 2026-01-31 05:55:28.445886956 +0000 UTC m=+0.134210130 container init 22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a (image=quay.io/ceph/ceph:v20, name=angry_murdock, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:55:28 compute-0 podman[92147]: 2026-01-31 05:55:28.45360866 +0000 UTC m=+0.141931784 container start 22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a (image=quay.io/ceph/ceph:v20, name=angry_murdock, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:28 compute-0 podman[92147]: 2026-01-31 05:55:28.459496224 +0000 UTC m=+0.147819348 container attach 22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a (image=quay.io/ceph/ceph:v20, name=angry_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:28 compute-0 sudo[92034]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:55:28 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:55:28 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:28 compute-0 sudo[92311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:28 compute-0 sudo[92311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:28 compute-0 sudo[92311]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:28 compute-0 sudo[92336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 05:55:28 compute-0 sudo[92336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 31 05:55:28 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/683232754' entity='client.admin' 
Jan 31 05:55:28 compute-0 angry_murdock[92160]: set ssl_option
Jan 31 05:55:29 compute-0 systemd[1]: libpod-22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a.scope: Deactivated successfully.
Jan 31 05:55:29 compute-0 podman[92147]: 2026-01-31 05:55:29.012287444 +0000 UTC m=+0.700610578 container died 22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a (image=quay.io/ceph/ceph:v20, name=angry_murdock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e5dfebf4c4698b2aa98ab4f390526886d6f0d0426548493c3860fe950e2c7c6-merged.mount: Deactivated successfully.
Jan 31 05:55:29 compute-0 podman[92147]: 2026-01-31 05:55:29.058479397 +0000 UTC m=+0.746802521 container remove 22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a (image=quay.io/ceph/ceph:v20, name=angry_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:29 compute-0 systemd[1]: libpod-conmon-22027439a374a0684d281195d7509f28d264c8642861cb8c19562f496e9bbe2a.scope: Deactivated successfully.
Jan 31 05:55:29 compute-0 sudo[92126]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:29 compute-0 sudo[92414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drkxkudrhhemwrtbnvnugezrnipfjhfe ; /usr/bin/python3'
Jan 31 05:55:29 compute-0 sudo[92414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:29 compute-0 python3[92416]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:29 compute-0 sudo[92336]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:55:29 compute-0 podman[92434]: 2026-01-31 05:55:29.398401352 +0000 UTC m=+0.041268778 container create 4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de (image=quay.io/ceph/ceph:v20, name=interesting_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:55:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:55:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:29 compute-0 systemd[1]: Started libpod-conmon-4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de.scope.
Jan 31 05:55:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:29 compute-0 sudo[92449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:29 compute-0 sudo[92449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec19d9aada7cb26e28ac28f9256b173f33412413f4e1e003eef09c83b186abf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec19d9aada7cb26e28ac28f9256b173f33412413f4e1e003eef09c83b186abf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec19d9aada7cb26e28ac28f9256b173f33412413f4e1e003eef09c83b186abf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:29 compute-0 sudo[92449]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:29 compute-0 podman[92434]: 2026-01-31 05:55:29.380572497 +0000 UTC m=+0.023439933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:29 compute-0 podman[92434]: 2026-01-31 05:55:29.483902388 +0000 UTC m=+0.126769834 container init 4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de (image=quay.io/ceph/ceph:v20, name=interesting_wing, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:29 compute-0 podman[92434]: 2026-01-31 05:55:29.48867057 +0000 UTC m=+0.131537996 container start 4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de (image=quay.io/ceph/ceph:v20, name=interesting_wing, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:29 compute-0 podman[92434]: 2026-01-31 05:55:29.492894638 +0000 UTC m=+0.135762084 container attach 4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de (image=quay.io/ceph/ceph:v20, name=interesting_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:55:29 compute-0 sudo[92477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:55:29 compute-0 sudo[92477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:29 compute-0 podman[92535]: 2026-01-31 05:55:29.77224765 +0000 UTC m=+0.060883653 container create e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:29 compute-0 podman[92535]: 2026-01-31 05:55:29.730350196 +0000 UTC m=+0.018986239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:29 compute-0 systemd[1]: Started libpod-conmon-e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9.scope.
Jan 31 05:55:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:29 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:55:29 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 31 05:55:29 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 05:55:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/683232754' entity='client.admin' 
Jan 31 05:55:29 compute-0 ceph-mon[75251]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:55:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:29 compute-0 podman[92535]: 2026-01-31 05:55:29.885732883 +0000 UTC m=+0.174368906 container init e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:55:29 compute-0 podman[92535]: 2026-01-31 05:55:29.889810636 +0000 UTC m=+0.178446629 container start e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:55:29 compute-0 recursing_ptolemy[92552]: 167 167
Jan 31 05:55:29 compute-0 systemd[1]: libpod-e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9.scope: Deactivated successfully.
Jan 31 05:55:29 compute-0 podman[92535]: 2026-01-31 05:55:29.915276304 +0000 UTC m=+0.203912357 container attach e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:55:29 compute-0 podman[92535]: 2026-01-31 05:55:29.916064286 +0000 UTC m=+0.204700299 container died e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:29 compute-0 interesting_wing[92462]: Scheduled rgw.rgw update...
Jan 31 05:55:29 compute-0 systemd[1]: libpod-4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de.scope: Deactivated successfully.
Jan 31 05:55:29 compute-0 podman[92434]: 2026-01-31 05:55:29.981038621 +0000 UTC m=+0.623906047 container died 4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de (image=quay.io/ceph/ceph:v20, name=interesting_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bb095c0e426f899c538ff04ace696208c49e6a097445a5a8c067fe2dd32c7a6-merged.mount: Deactivated successfully.
Jan 31 05:55:30 compute-0 podman[92535]: 2026-01-31 05:55:30.184427173 +0000 UTC m=+0.473063166 container remove e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:55:30 compute-0 systemd[1]: libpod-conmon-e051eeb71de6379c8f8b095b0de8984ff544174e67800d3f2d9ad9af075b66d9.scope: Deactivated successfully.
Jan 31 05:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-eec19d9aada7cb26e28ac28f9256b173f33412413f4e1e003eef09c83b186abf-merged.mount: Deactivated successfully.
Jan 31 05:55:30 compute-0 podman[92434]: 2026-01-31 05:55:30.366557713 +0000 UTC m=+1.009425139 container remove 4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de (image=quay.io/ceph/ceph:v20, name=interesting_wing, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:55:30 compute-0 sudo[92414]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:30 compute-0 podman[92594]: 2026-01-31 05:55:30.333477154 +0000 UTC m=+0.059414602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:30 compute-0 podman[92594]: 2026-01-31 05:55:30.435655463 +0000 UTC m=+0.161592881 container create 3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hofstadter, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:30 compute-0 systemd[1]: libpod-conmon-4113476d030ed7c217ca7772cdda861940943a6acc7ce71727cc87f4de0bd4de.scope: Deactivated successfully.
Jan 31 05:55:30 compute-0 systemd[1]: Started libpod-conmon-3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec.scope.
Jan 31 05:55:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8733e3bbe833f6d45360e0a00b47b436b6d48a79338264dd53794feeb418ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8733e3bbe833f6d45360e0a00b47b436b6d48a79338264dd53794feeb418ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8733e3bbe833f6d45360e0a00b47b436b6d48a79338264dd53794feeb418ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8733e3bbe833f6d45360e0a00b47b436b6d48a79338264dd53794feeb418ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8733e3bbe833f6d45360e0a00b47b436b6d48a79338264dd53794feeb418ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:30 compute-0 podman[92594]: 2026-01-31 05:55:30.634415506 +0000 UTC m=+0.360352934 container init 3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hofstadter, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:55:30 compute-0 podman[92594]: 2026-01-31 05:55:30.643626652 +0000 UTC m=+0.369564070 container start 3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 05:55:30 compute-0 podman[92594]: 2026-01-31 05:55:30.662136386 +0000 UTC m=+0.388073824 container attach 3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:30 compute-0 ceph-mon[75251]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:55:30 compute-0 ceph-mon[75251]: Saving service rgw.rgw spec with placement compute-0
Jan 31 05:55:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:31 compute-0 great_hofstadter[92611]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:55:31 compute-0 great_hofstadter[92611]: --> All data devices are unavailable
Jan 31 05:55:31 compute-0 systemd[1]: libpod-3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec.scope: Deactivated successfully.
Jan 31 05:55:31 compute-0 podman[92594]: 2026-01-31 05:55:31.0916085 +0000 UTC m=+0.817545938 container died 3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-db8733e3bbe833f6d45360e0a00b47b436b6d48a79338264dd53794feeb418ad-merged.mount: Deactivated successfully.
Jan 31 05:55:31 compute-0 python3[92717]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:55:31 compute-0 podman[92594]: 2026-01-31 05:55:31.354566975 +0000 UTC m=+1.080504393 container remove 3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:55:31 compute-0 systemd[1]: libpod-conmon-3ebba9dfbddc3dbb3f196c58df7a2054e0430d3f934aab7ccd5698e83b00e9ec.scope: Deactivated successfully.
Jan 31 05:55:31 compute-0 sudo[92477]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:31 compute-0 sudo[92744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:31 compute-0 sudo[92744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:31 compute-0 sudo[92744]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:31 compute-0 sudo[92790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:55:31 compute-0 sudo[92790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:31 compute-0 python3[92841]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838931.0688102-36628-161571104821955/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:55:31 compute-0 podman[92877]: 2026-01-31 05:55:31.736741214 +0000 UTC m=+0.016233682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:31 compute-0 podman[92877]: 2026-01-31 05:55:31.883712508 +0000 UTC m=+0.163204976 container create a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:55:32 compute-0 sudo[92914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-betunbqhjojoqcobheczunlnjqkuulwg ; /usr/bin/python3'
Jan 31 05:55:32 compute-0 sudo[92914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:32 compute-0 ceph-mon[75251]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:32 compute-0 systemd[1]: Started libpod-conmon-a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1.scope.
Jan 31 05:55:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:32 compute-0 podman[92877]: 2026-01-31 05:55:32.138444106 +0000 UTC m=+0.417936664 container init a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:32 compute-0 podman[92877]: 2026-01-31 05:55:32.146577032 +0000 UTC m=+0.426069540 container start a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:55:32 compute-0 loving_gauss[92919]: 167 167
Jan 31 05:55:32 compute-0 systemd[1]: libpod-a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1.scope: Deactivated successfully.
Jan 31 05:55:32 compute-0 python3[92916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:32 compute-0 podman[92877]: 2026-01-31 05:55:32.178014805 +0000 UTC m=+0.457507283 container attach a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gauss, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:32 compute-0 podman[92877]: 2026-01-31 05:55:32.178680024 +0000 UTC m=+0.458172502 container died a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-764b754d4b17f011f080f4ed3b4291dfff20b998743095f31144917a0708a1ec-merged.mount: Deactivated successfully.
Jan 31 05:55:32 compute-0 podman[92877]: 2026-01-31 05:55:32.546385061 +0000 UTC m=+0.825877529 container remove a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:55:32 compute-0 systemd[1]: libpod-conmon-a6711263d64d37dac29010c710985fde0c7cd8b772626b2967c11ef619fb8ec1.scope: Deactivated successfully.
Jan 31 05:55:32 compute-0 podman[92925]: 2026-01-31 05:55:32.64822382 +0000 UTC m=+0.475747440 container create ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906 (image=quay.io/ceph/ceph:v20, name=romantic_euler, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:55:32 compute-0 systemd[1]: Started libpod-conmon-ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906.scope.
Jan 31 05:55:32 compute-0 podman[92952]: 2026-01-31 05:55:32.700912254 +0000 UTC m=+0.056591643 container create 91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da97cbdc4992614df9f254a68f3dc6245b7fd85ba2061b9150a73f068dc40493/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da97cbdc4992614df9f254a68f3dc6245b7fd85ba2061b9150a73f068dc40493/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da97cbdc4992614df9f254a68f3dc6245b7fd85ba2061b9150a73f068dc40493/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:32 compute-0 podman[92925]: 2026-01-31 05:55:32.633925873 +0000 UTC m=+0.461449543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:32 compute-0 systemd[1]: Started libpod-conmon-91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce.scope.
Jan 31 05:55:32 compute-0 podman[92925]: 2026-01-31 05:55:32.73890086 +0000 UTC m=+0.566424540 container init ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906 (image=quay.io/ceph/ceph:v20, name=romantic_euler, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:55:32 compute-0 podman[92925]: 2026-01-31 05:55:32.745181034 +0000 UTC m=+0.572704654 container start ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906 (image=quay.io/ceph/ceph:v20, name=romantic_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:55:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2201be970013d3e81ed58ba77784d9db464fa402d4b7edc9c0f9855bcf1bb23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2201be970013d3e81ed58ba77784d9db464fa402d4b7edc9c0f9855bcf1bb23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2201be970013d3e81ed58ba77784d9db464fa402d4b7edc9c0f9855bcf1bb23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2201be970013d3e81ed58ba77784d9db464fa402d4b7edc9c0f9855bcf1bb23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:32 compute-0 podman[92925]: 2026-01-31 05:55:32.766303291 +0000 UTC m=+0.593826911 container attach ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906 (image=quay.io/ceph/ceph:v20, name=romantic_euler, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:55:32 compute-0 podman[92952]: 2026-01-31 05:55:32.676411983 +0000 UTC m=+0.032091422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:32 compute-0 podman[92952]: 2026-01-31 05:55:32.77850619 +0000 UTC m=+0.134185589 container init 91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:32 compute-0 podman[92952]: 2026-01-31 05:55:32.783344715 +0000 UTC m=+0.139024094 container start 91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:32 compute-0 podman[92952]: 2026-01-31 05:55:32.788333803 +0000 UTC m=+0.144013202 container attach 91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]: {
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:     "0": [
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:         {
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "devices": [
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "/dev/loop3"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             ],
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_name": "ceph_lv0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_size": "21470642176",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "name": "ceph_lv0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "tags": {
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.crush_device_class": "",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.encrypted": "0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osd_id": "0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.type": "block",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.vdo": "0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.with_tpm": "0"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             },
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "type": "block",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "vg_name": "ceph_vg0"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:         }
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:     ],
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:     "1": [
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:         {
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "devices": [
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "/dev/loop4"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             ],
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_name": "ceph_lv1",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_size": "21470642176",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "name": "ceph_lv1",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "tags": {
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.crush_device_class": "",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.encrypted": "0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osd_id": "1",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.type": "block",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.vdo": "0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.with_tpm": "0"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             },
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "type": "block",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "vg_name": "ceph_vg1"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:         }
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:     ],
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:     "2": [
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:         {
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "devices": [
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "/dev/loop5"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             ],
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_name": "ceph_lv2",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_size": "21470642176",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "name": "ceph_lv2",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "tags": {
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.crush_device_class": "",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.encrypted": "0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osd_id": "2",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.type": "block",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.vdo": "0",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:                 "ceph.with_tpm": "0"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             },
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "type": "block",
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:             "vg_name": "ceph_vg2"
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:         }
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]:     ]
Jan 31 05:55:33 compute-0 vigorous_wilson[92975]: }
Jan 31 05:55:33 compute-0 systemd[1]: libpod-91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce.scope: Deactivated successfully.
Jan 31 05:55:33 compute-0 podman[92952]: 2026-01-31 05:55:33.051637969 +0000 UTC m=+0.407317338 container died 91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2201be970013d3e81ed58ba77784d9db464fa402d4b7edc9c0f9855bcf1bb23-merged.mount: Deactivated successfully.
Jan 31 05:55:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:33 compute-0 podman[92952]: 2026-01-31 05:55:33.121078349 +0000 UTC m=+0.476757758 container remove 91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:55:33 compute-0 systemd[1]: libpod-conmon-91fe7d74c3a7e3d7c92e5cb85cb958a77a55e5f6add21262cf41475b6ecb89ce.scope: Deactivated successfully.
Jan 31 05:55:33 compute-0 sudo[92790]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:33 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:55:33 compute-0 ceph-mgr[75550]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 05:55:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 05:55:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 05:55:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 05:55:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 05:55:33 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0[75247]: 2026-01-31T05:55:33.190+0000 7f9160192640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 05:55:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e2 new map
Jan 31 05:55:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-31T05:55:33:192010+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T05:55:33.191774+0000
                                           modified        2026-01-31T05:55:33.191774+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 31 05:55:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 31 05:55:33 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 05:55:33 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 05:55:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 05:55:33 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:33 compute-0 ceph-mgr[75550]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 05:55:33 compute-0 sudo[93016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:33 compute-0 sudo[93016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:33 compute-0 sudo[93016]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:33 compute-0 systemd[1]: libpod-ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906.scope: Deactivated successfully.
Jan 31 05:55:33 compute-0 podman[92925]: 2026-01-31 05:55:33.232550866 +0000 UTC m=+1.060074486 container died ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906 (image=quay.io/ceph/ceph:v20, name=romantic_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-da97cbdc4992614df9f254a68f3dc6245b7fd85ba2061b9150a73f068dc40493-merged.mount: Deactivated successfully.
Jan 31 05:55:33 compute-0 podman[92925]: 2026-01-31 05:55:33.279547762 +0000 UTC m=+1.107071402 container remove ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906 (image=quay.io/ceph/ceph:v20, name=romantic_euler, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:55:33 compute-0 sudo[93043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:55:33 compute-0 systemd[1]: libpod-conmon-ee273c800708c0e923bc78f9d829fe06daf179246d17bdb45bd15bf7b590a906.scope: Deactivated successfully.
Jan 31 05:55:33 compute-0 sudo[93043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:33 compute-0 sudo[92914]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:33 compute-0 sudo[93102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbseiljjqlplhqhpadfkxozoakxcfeux ; /usr/bin/python3'
Jan 31 05:55:33 compute-0 sudo[93102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:33 compute-0 python3[93104]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:33 compute-0 podman[93118]: 2026-01-31 05:55:33.576808502 +0000 UTC m=+0.087419420 container create 9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_austin, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:55:33 compute-0 podman[93118]: 2026-01-31 05:55:33.514180142 +0000 UTC m=+0.024791080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:33 compute-0 systemd[1]: Started libpod-conmon-9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990.scope.
Jan 31 05:55:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:33 compute-0 podman[93133]: 2026-01-31 05:55:33.628495908 +0000 UTC m=+0.072808074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:33 compute-0 podman[93133]: 2026-01-31 05:55:33.749074628 +0000 UTC m=+0.193386824 container create 6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d (image=quay.io/ceph/ceph:v20, name=wizardly_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:55:33 compute-0 podman[93118]: 2026-01-31 05:55:33.761969337 +0000 UTC m=+0.272580255 container init 9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_austin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:33 compute-0 podman[93118]: 2026-01-31 05:55:33.766223635 +0000 UTC m=+0.276834553 container start 9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_austin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:33 compute-0 condescending_austin[93148]: 167 167
Jan 31 05:55:33 compute-0 systemd[1]: libpod-9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990.scope: Deactivated successfully.
Jan 31 05:55:33 compute-0 podman[93118]: 2026-01-31 05:55:33.780488121 +0000 UTC m=+0.291099089 container attach 9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_austin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:33 compute-0 podman[93118]: 2026-01-31 05:55:33.78081206 +0000 UTC m=+0.291423008 container died 9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:55:33 compute-0 systemd[1]: Started libpod-conmon-6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d.scope.
Jan 31 05:55:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d2225fb751f928664bcdba60b33cbc7bc408fe918617fad46740cfaeed4fcb2-merged.mount: Deactivated successfully.
Jan 31 05:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cadafecd4537753c5a10dcf94096a50791c6567c776e09bb6f6c5a03615cafc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cadafecd4537753c5a10dcf94096a50791c6567c776e09bb6f6c5a03615cafc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cadafecd4537753c5a10dcf94096a50791c6567c776e09bb6f6c5a03615cafc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:33 compute-0 podman[93118]: 2026-01-31 05:55:33.841944279 +0000 UTC m=+0.352555227 container remove 9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:55:33 compute-0 systemd[1]: libpod-conmon-9465a6fbb65dfa6e6320a29724736657faac3941de5329bcc7e301698b1ae990.scope: Deactivated successfully.
Jan 31 05:55:33 compute-0 podman[93133]: 2026-01-31 05:55:33.858828878 +0000 UTC m=+0.303141064 container init 6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d (image=quay.io/ceph/ceph:v20, name=wizardly_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:33 compute-0 podman[93133]: 2026-01-31 05:55:33.861955135 +0000 UTC m=+0.306267311 container start 6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d (image=quay.io/ceph/ceph:v20, name=wizardly_satoshi, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:55:33 compute-0 podman[93133]: 2026-01-31 05:55:33.867951081 +0000 UTC m=+0.312263247 container attach 6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d (image=quay.io/ceph/ceph:v20, name=wizardly_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:55:34 compute-0 podman[93180]: 2026-01-31 05:55:34.013469565 +0000 UTC m=+0.041982338 container create ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cray, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:34 compute-0 systemd[1]: Started libpod-conmon-ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe.scope.
Jan 31 05:55:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da66845ee23f3ef72d2eb46494058ad8acf3677cb4b90db3532d803051c571d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da66845ee23f3ef72d2eb46494058ad8acf3677cb4b90db3532d803051c571d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da66845ee23f3ef72d2eb46494058ad8acf3677cb4b90db3532d803051c571d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da66845ee23f3ef72d2eb46494058ad8acf3677cb4b90db3532d803051c571d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:34 compute-0 podman[93180]: 2026-01-31 05:55:34.000515125 +0000 UTC m=+0.029027908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:34 compute-0 podman[93180]: 2026-01-31 05:55:34.100184534 +0000 UTC m=+0.128697357 container init ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:55:34 compute-0 podman[93180]: 2026-01-31 05:55:34.106011126 +0000 UTC m=+0.134523909 container start ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:34 compute-0 podman[93180]: 2026-01-31 05:55:34.110374417 +0000 UTC m=+0.138887200 container attach ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:55:34 compute-0 ceph-mon[75251]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:34 compute-0 ceph-mon[75251]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:55:34 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 05:55:34 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 05:55:34 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 05:55:34 compute-0 ceph-mon[75251]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 05:55:34 compute-0 ceph-mon[75251]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 05:55:34 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 05:55:34 compute-0 ceph-mon[75251]: osdmap e32: 3 total, 3 up, 3 in
Jan 31 05:55:34 compute-0 ceph-mon[75251]: fsmap cephfs:0
Jan 31 05:55:34 compute-0 ceph-mon[75251]: Saving service mds.cephfs spec with placement compute-0
Jan 31 05:55:34 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:34 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:55:34 compute-0 ceph-mgr[75550]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 05:55:34 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 05:55:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 05:55:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:34 compute-0 wizardly_satoshi[93165]: Scheduled mds.cephfs update...
Jan 31 05:55:34 compute-0 systemd[1]: libpod-6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d.scope: Deactivated successfully.
Jan 31 05:55:34 compute-0 podman[93133]: 2026-01-31 05:55:34.283611341 +0000 UTC m=+0.727923507 container died 6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d (image=quay.io/ceph/ceph:v20, name=wizardly_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cadafecd4537753c5a10dcf94096a50791c6567c776e09bb6f6c5a03615cafc-merged.mount: Deactivated successfully.
Jan 31 05:55:34 compute-0 podman[93133]: 2026-01-31 05:55:34.323704095 +0000 UTC m=+0.768016261 container remove 6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d (image=quay.io/ceph/ceph:v20, name=wizardly_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:55:34 compute-0 systemd[1]: libpod-conmon-6cbab673f1e26f523936765205e9b546cdc84076bb75e60d59d81f1f0038329d.scope: Deactivated successfully.
Jan 31 05:55:34 compute-0 sudo[93102]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:34 compute-0 lvm[93302]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:55:34 compute-0 lvm[93302]: VG ceph_vg0 finished
Jan 31 05:55:34 compute-0 lvm[93305]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:55:34 compute-0 lvm[93305]: VG ceph_vg1 finished
Jan 31 05:55:34 compute-0 lvm[93307]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:55:34 compute-0 lvm[93307]: VG ceph_vg2 finished
Jan 31 05:55:34 compute-0 inspiring_cray[93214]: {}
Jan 31 05:55:34 compute-0 systemd[1]: libpod-ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe.scope: Deactivated successfully.
Jan 31 05:55:34 compute-0 podman[93180]: 2026-01-31 05:55:34.789780824 +0000 UTC m=+0.818293597 container died ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6da66845ee23f3ef72d2eb46494058ad8acf3677cb4b90db3532d803051c571d-merged.mount: Deactivated successfully.
Jan 31 05:55:34 compute-0 podman[93180]: 2026-01-31 05:55:34.869993373 +0000 UTC m=+0.898506136 container remove ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:55:34 compute-0 systemd[1]: libpod-conmon-ea0406d82537ba55f73394a708688220d2c51730f27be020ddcb6d783acfeafe.scope: Deactivated successfully.
Jan 31 05:55:34 compute-0 sudo[93043]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:55:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:55:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:35 compute-0 sudo[93322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:55:35 compute-0 sudo[93322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:35 compute-0 sudo[93322]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:35 compute-0 sudo[93347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:35 compute-0 sudo[93347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:35 compute-0 sudo[93347]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:35 compute-0 sudo[93372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:55:35 compute-0 sudo[93372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:35 compute-0 sudo[93472]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcpnctbmgomsnilhfaxfmwahozcovbak ; /usr/bin/python3'
Jan 31 05:55:35 compute-0 sudo[93472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:35 compute-0 ceph-mon[75251]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 05:55:35 compute-0 ceph-mon[75251]: Saving service mds.cephfs spec with placement compute-0
Jan 31 05:55:35 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:35 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:35 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:35 compute-0 python3[93474]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 05:55:35 compute-0 sudo[93472]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:35 compute-0 podman[93519]: 2026-01-31 05:55:35.455645716 +0000 UTC m=+0.046600446 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:35 compute-0 podman[93519]: 2026-01-31 05:55:35.540629297 +0000 UTC m=+0.131584037 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:35 compute-0 sudo[93615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blfjezxxgtqrxdbdwnepwyqmslkbziny ; /usr/bin/python3'
Jan 31 05:55:35 compute-0 sudo[93615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:35 compute-0 python3[93626]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769838935.159796-36676-20151161694853/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=1e8d85a566d029d8407896eea1d32944048a7d4b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:55:35 compute-0 sudo[93615]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:35 compute-0 sudo[93372]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:55:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:55:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:55:35 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:55:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:55:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:55:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:55:36 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:55:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:55:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:55:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:55:36 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:36 compute-0 sudo[93791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xipfxwyyenvlabgnfudzwoagwlqeukas ; /usr/bin/python3'
Jan 31 05:55:36 compute-0 sudo[93791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:36 compute-0 sudo[93790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:36 compute-0 sudo[93790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:36 compute-0 sudo[93790]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:36 compute-0 sudo[93818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:55:36 compute-0 sudo[93818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:36 compute-0 python3[93805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:36 compute-0 podman[93843]: 2026-01-31 05:55:36.202555699 +0000 UTC m=+0.029947543 container create 0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735 (image=quay.io/ceph/ceph:v20, name=trusting_lewin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:36 compute-0 systemd[1]: Started libpod-conmon-0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735.scope.
Jan 31 05:55:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146047349b2e89190216a14126ba3ee4532939602a8c0b26147f93fafce7cc3f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146047349b2e89190216a14126ba3ee4532939602a8c0b26147f93fafce7cc3f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:36 compute-0 podman[93843]: 2026-01-31 05:55:36.267880885 +0000 UTC m=+0.095272769 container init 0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735 (image=quay.io/ceph/ceph:v20, name=trusting_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:55:36 compute-0 podman[93843]: 2026-01-31 05:55:36.271816714 +0000 UTC m=+0.099208558 container start 0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735 (image=quay.io/ceph/ceph:v20, name=trusting_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:36 compute-0 ceph-mon[75251]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:55:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:55:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:55:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:36 compute-0 podman[93843]: 2026-01-31 05:55:36.277925704 +0000 UTC m=+0.105317548 container attach 0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735 (image=quay.io/ceph/ceph:v20, name=trusting_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:36 compute-0 podman[93843]: 2026-01-31 05:55:36.188768326 +0000 UTC m=+0.016160190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:36 compute-0 podman[93874]: 2026-01-31 05:55:36.38793299 +0000 UTC m=+0.082195205 container create 80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:55:36 compute-0 podman[93874]: 2026-01-31 05:55:36.3209918 +0000 UTC m=+0.015254035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:36 compute-0 systemd[1]: Started libpod-conmon-80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907.scope.
Jan 31 05:55:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:36 compute-0 podman[93874]: 2026-01-31 05:55:36.515751722 +0000 UTC m=+0.210013957 container init 80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:36 compute-0 podman[93874]: 2026-01-31 05:55:36.520684619 +0000 UTC m=+0.214946834 container start 80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:36 compute-0 exciting_mcnulty[93909]: 167 167
Jan 31 05:55:36 compute-0 systemd[1]: libpod-80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907.scope: Deactivated successfully.
Jan 31 05:55:36 compute-0 conmon[93909]: conmon 80c019f5d1ae745310bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907.scope/container/memory.events
Jan 31 05:55:36 compute-0 podman[93874]: 2026-01-31 05:55:36.557701887 +0000 UTC m=+0.251964112 container attach 80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:55:36 compute-0 podman[93874]: 2026-01-31 05:55:36.557995256 +0000 UTC m=+0.252257461 container died 80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:55:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb9b52ad330ee00841b3cdbdd847b352266dbfd7306155b5b0ab1987b1b92a22-merged.mount: Deactivated successfully.
Jan 31 05:55:36 compute-0 podman[93874]: 2026-01-31 05:55:36.625580703 +0000 UTC m=+0.319842918 container remove 80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_mcnulty, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 05:55:36 compute-0 systemd[1]: libpod-conmon-80c019f5d1ae745310bca84b9bad1366082472614242c8c16bfed00638e58907.scope: Deactivated successfully.
Jan 31 05:55:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 31 05:55:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3753418879' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 05:55:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3753418879' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 05:55:36 compute-0 systemd[1]: libpod-0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735.scope: Deactivated successfully.
Jan 31 05:55:36 compute-0 podman[93935]: 2026-01-31 05:55:36.717919009 +0000 UTC m=+0.017529218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:36 compute-0 podman[93935]: 2026-01-31 05:55:36.818308089 +0000 UTC m=+0.117918308 container create 814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:36 compute-0 podman[93843]: 2026-01-31 05:55:36.819658416 +0000 UTC m=+0.647050260 container died 0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735 (image=quay.io/ceph/ceph:v20, name=trusting_lewin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-146047349b2e89190216a14126ba3ee4532939602a8c0b26147f93fafce7cc3f-merged.mount: Deactivated successfully.
Jan 31 05:55:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:37 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3753418879' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 05:55:37 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3753418879' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 05:55:37 compute-0 podman[93843]: 2026-01-31 05:55:37.388626116 +0000 UTC m=+1.216017980 container remove 0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735 (image=quay.io/ceph/ceph:v20, name=trusting_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:55:37 compute-0 sudo[93791]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:37 compute-0 systemd[1]: libpod-conmon-0f58d15d961f8d4101118d07cbf21053b568b188712457bbb793706439a13735.scope: Deactivated successfully.
Jan 31 05:55:37 compute-0 systemd[1]: Started libpod-conmon-814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152.scope.
Jan 31 05:55:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a6945eb2e44fea33c97f70eeae19aa5c4d3942baa29df18d7c53ac0b1835ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a6945eb2e44fea33c97f70eeae19aa5c4d3942baa29df18d7c53ac0b1835ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a6945eb2e44fea33c97f70eeae19aa5c4d3942baa29df18d7c53ac0b1835ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a6945eb2e44fea33c97f70eeae19aa5c4d3942baa29df18d7c53ac0b1835ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a6945eb2e44fea33c97f70eeae19aa5c4d3942baa29df18d7c53ac0b1835ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:37 compute-0 podman[93935]: 2026-01-31 05:55:37.87750971 +0000 UTC m=+1.177119949 container init 814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brattain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:55:37 compute-0 podman[93935]: 2026-01-31 05:55:37.885509552 +0000 UTC m=+1.185119761 container start 814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brattain, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:55:37 compute-0 sudo[93994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgxzrpovkmxmowtdoqrwzivrbuuskspa ; /usr/bin/python3'
Jan 31 05:55:37 compute-0 sudo[93994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:38 compute-0 podman[93935]: 2026-01-31 05:55:37.999692085 +0000 UTC m=+1.299302314 container attach 814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brattain, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:38 compute-0 python3[93996]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:38 compute-0 podman[94000]: 2026-01-31 05:55:38.149928459 +0000 UTC m=+0.031339152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:38 compute-0 crazy_brattain[93966]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:55:38 compute-0 crazy_brattain[93966]: --> All data devices are unavailable
Jan 31 05:55:38 compute-0 podman[94000]: 2026-01-31 05:55:38.306379976 +0000 UTC m=+0.187790569 container create 8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece (image=quay.io/ceph/ceph:v20, name=pedantic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 05:55:38 compute-0 systemd[1]: libpod-814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152.scope: Deactivated successfully.
Jan 31 05:55:38 compute-0 podman[93935]: 2026-01-31 05:55:38.563582282 +0000 UTC m=+1.863192501 container died 814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:38 compute-0 systemd[1]: Started libpod-conmon-8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece.scope.
Jan 31 05:55:38 compute-0 ceph-mon[75251]: pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22698f6f56ffbd4e95775b45499f8499d099aed024b3a5a1cc37602e8ef8f359/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22698f6f56ffbd4e95775b45499f8499d099aed024b3a5a1cc37602e8ef8f359/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:38 compute-0 podman[94000]: 2026-01-31 05:55:38.913470755 +0000 UTC m=+0.794881378 container init 8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece (image=quay.io/ceph/ceph:v20, name=pedantic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:55:38 compute-0 podman[94000]: 2026-01-31 05:55:38.920215022 +0000 UTC m=+0.801625615 container start 8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece (image=quay.io/ceph/ceph:v20, name=pedantic_mayer, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 05:55:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:39 compute-0 podman[94000]: 2026-01-31 05:55:39.195440809 +0000 UTC m=+1.076851422 container attach 8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece (image=quay.io/ceph/ceph:v20, name=pedantic_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-65a6945eb2e44fea33c97f70eeae19aa5c4d3942baa29df18d7c53ac0b1835ad-merged.mount: Deactivated successfully.
Jan 31 05:55:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 05:55:39 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3096446979' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 05:55:39 compute-0 pedantic_mayer[94039]: 
Jan 31 05:55:39 compute-0 pedantic_mayer[94039]: {"fsid":"797ee2fc-ca49-5eee-87c0-542bb035a7d7","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":132,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1769838900,"num_in_osds":3,"osd_in_since":1769838871,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83898368,"bytes_avail":64328028160,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-31T05:55:33:192010+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T05:54:46.397082+0000","services":{}},"progress_events":{}}
Jan 31 05:55:39 compute-0 systemd[1]: libpod-8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece.scope: Deactivated successfully.
Jan 31 05:55:40 compute-0 podman[93935]: 2026-01-31 05:55:40.0631892 +0000 UTC m=+3.362799439 container remove 814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:55:40 compute-0 systemd[1]: libpod-conmon-814778fcc0157f970e9ea3baf47928d816cb4c91a2622e0cf01f205846f80152.scope: Deactivated successfully.
Jan 31 05:55:40 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3096446979' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 05:55:40 compute-0 podman[94000]: 2026-01-31 05:55:40.097291187 +0000 UTC m=+1.978701800 container died 8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece (image=quay.io/ceph/ceph:v20, name=pedantic_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 05:55:40 compute-0 sudo[93818]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:40 compute-0 sudo[94077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:40 compute-0 sudo[94077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:40 compute-0 sudo[94077]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:40 compute-0 sudo[94102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:55:40 compute-0 sudo[94102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-22698f6f56ffbd4e95775b45499f8499d099aed024b3a5a1cc37602e8ef8f359-merged.mount: Deactivated successfully.
Jan 31 05:55:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:41 compute-0 podman[94000]: 2026-01-31 05:55:41.117197456 +0000 UTC m=+2.998608059 container remove 8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece (image=quay.io/ceph/ceph:v20, name=pedantic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:41 compute-0 systemd[1]: libpod-conmon-8038599f137ddf0ab8bc03733b1971477a909db7fe0763d5cdf093750149eece.scope: Deactivated successfully.
Jan 31 05:55:41 compute-0 sudo[93994]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:41 compute-0 ceph-mon[75251]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:41 compute-0 sudo[94178]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmwfivfgttdrepzxoxdrskklbbmcatma ; /usr/bin/python3'
Jan 31 05:55:41 compute-0 sudo[94178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:41 compute-0 podman[94141]: 2026-01-31 05:55:41.215208779 +0000 UTC m=+0.029275594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:41 compute-0 python3[94180]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:41 compute-0 podman[94141]: 2026-01-31 05:55:41.532146337 +0000 UTC m=+0.346213102 container create 57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:55:41 compute-0 systemd[1]: Started libpod-conmon-57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d.scope.
Jan 31 05:55:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:41 compute-0 podman[94181]: 2026-01-31 05:55:41.789586729 +0000 UTC m=+0.349993936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:41 compute-0 podman[94181]: 2026-01-31 05:55:41.916524985 +0000 UTC m=+0.476932172 container create 9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc (image=quay.io/ceph/ceph:v20, name=bold_jones, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:55:42 compute-0 systemd[1]: Started libpod-conmon-9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc.scope.
Jan 31 05:55:42 compute-0 podman[94141]: 2026-01-31 05:55:42.02825997 +0000 UTC m=+0.842326785 container init 57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25898df7ef2e667b71625832b476be6da8929e2c3d6b0122bd801782175fdd1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25898df7ef2e667b71625832b476be6da8929e2c3d6b0122bd801782175fdd1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:42 compute-0 podman[94141]: 2026-01-31 05:55:42.033053703 +0000 UTC m=+0.847120468 container start 57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:42 compute-0 frosty_moore[94197]: 167 167
Jan 31 05:55:42 compute-0 systemd[1]: libpod-57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d.scope: Deactivated successfully.
Jan 31 05:55:42 compute-0 podman[94141]: 2026-01-31 05:55:42.051799134 +0000 UTC m=+0.865865909 container attach 57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moore, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 05:55:42 compute-0 podman[94141]: 2026-01-31 05:55:42.052262777 +0000 UTC m=+0.866329552 container died 57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:42 compute-0 podman[94181]: 2026-01-31 05:55:42.156303758 +0000 UTC m=+0.716710975 container init 9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc (image=quay.io/ceph/ceph:v20, name=bold_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:42 compute-0 podman[94181]: 2026-01-31 05:55:42.160264558 +0000 UTC m=+0.720671735 container start 9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc (image=quay.io/ceph/ceph:v20, name=bold_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:55:42 compute-0 podman[94181]: 2026-01-31 05:55:42.18624159 +0000 UTC m=+0.746648777 container attach 9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc (image=quay.io/ceph/ceph:v20, name=bold_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-aabb97413727167837db6976fbafc3bb2a0b4a3ba5cf2ea24cc12495b21f6c57-merged.mount: Deactivated successfully.
Jan 31 05:55:42 compute-0 ceph-mon[75251]: pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:42 compute-0 podman[94141]: 2026-01-31 05:55:42.378870972 +0000 UTC m=+1.192937757 container remove 57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_moore, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:42 compute-0 systemd[1]: libpod-conmon-57a0736e7170a7d9d1d404c93eaa0979c0c21369bd246cec67626db89175ed7d.scope: Deactivated successfully.
Jan 31 05:55:42 compute-0 podman[94248]: 2026-01-31 05:55:42.465266082 +0000 UTC m=+0.018437323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:42 compute-0 podman[94248]: 2026-01-31 05:55:42.642349933 +0000 UTC m=+0.195521194 container create 97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 05:55:42 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243264248' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 05:55:42 compute-0 bold_jones[94202]: 
Jan 31 05:55:42 compute-0 bold_jones[94202]: {"epoch":1,"fsid":"797ee2fc-ca49-5eee-87c0-542bb035a7d7","modified":"2026-01-31T05:53:20.867409Z","created":"2026-01-31T05:53:20.867409Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 31 05:55:42 compute-0 bold_jones[94202]: dumped monmap epoch 1
Jan 31 05:55:42 compute-0 podman[94181]: 2026-01-31 05:55:42.678673722 +0000 UTC m=+1.239080959 container died 9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc (image=quay.io/ceph/ceph:v20, name=bold_jones, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:42 compute-0 systemd[1]: Started libpod-conmon-97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100.scope.
Jan 31 05:55:42 compute-0 systemd[1]: libpod-9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc.scope: Deactivated successfully.
Jan 31 05:55:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9b8654da416ab0540cfb54466a46b474c9666893afa93fb1b889e2cd4aa417/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9b8654da416ab0540cfb54466a46b474c9666893afa93fb1b889e2cd4aa417/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9b8654da416ab0540cfb54466a46b474c9666893afa93fb1b889e2cd4aa417/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c9b8654da416ab0540cfb54466a46b474c9666893afa93fb1b889e2cd4aa417/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:42 compute-0 podman[94248]: 2026-01-31 05:55:42.755481176 +0000 UTC m=+0.308652407 container init 97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_burnell, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 05:55:42 compute-0 podman[94248]: 2026-01-31 05:55:42.766081901 +0000 UTC m=+0.319253122 container start 97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:55:42 compute-0 podman[94248]: 2026-01-31 05:55:42.786055846 +0000 UTC m=+0.339227107 container attach 97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 05:55:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25898df7ef2e667b71625832b476be6da8929e2c3d6b0122bd801782175fdd1-merged.mount: Deactivated successfully.
Jan 31 05:55:42 compute-0 podman[94181]: 2026-01-31 05:55:42.898795329 +0000 UTC m=+1.459202496 container remove 9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc (image=quay.io/ceph/ceph:v20, name=bold_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 05:55:42 compute-0 systemd[1]: libpod-conmon-9b7ba2e5bdf4ee7b517aec56f183ce4e9c5a8146af0a2c0c15d5b41ffb4724cc.scope: Deactivated successfully.
Jan 31 05:55:42 compute-0 sudo[94178]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:43 compute-0 frosty_burnell[94272]: {
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:     "0": [
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:         {
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "devices": [
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "/dev/loop3"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             ],
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_name": "ceph_lv0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_size": "21470642176",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "name": "ceph_lv0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "tags": {
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.crush_device_class": "",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.encrypted": "0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osd_id": "0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.type": "block",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.vdo": "0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.with_tpm": "0"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             },
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "type": "block",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "vg_name": "ceph_vg0"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:         }
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:     ],
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:     "1": [
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:         {
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "devices": [
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "/dev/loop4"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             ],
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_name": "ceph_lv1",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_size": "21470642176",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "name": "ceph_lv1",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "tags": {
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.crush_device_class": "",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.encrypted": "0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osd_id": "1",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.type": "block",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.vdo": "0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.with_tpm": "0"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             },
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "type": "block",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "vg_name": "ceph_vg1"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:         }
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:     ],
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:     "2": [
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:         {
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "devices": [
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "/dev/loop5"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             ],
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_name": "ceph_lv2",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_size": "21470642176",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "name": "ceph_lv2",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "tags": {
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.crush_device_class": "",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.encrypted": "0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osd_id": "2",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.type": "block",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.vdo": "0",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:                 "ceph.with_tpm": "0"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             },
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "type": "block",
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:             "vg_name": "ceph_vg2"
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:         }
Jan 31 05:55:43 compute-0 frosty_burnell[94272]:     ]
Jan 31 05:55:43 compute-0 frosty_burnell[94272]: }
Jan 31 05:55:43 compute-0 systemd[1]: libpod-97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100.scope: Deactivated successfully.
Jan 31 05:55:43 compute-0 conmon[94272]: conmon 97616d52424c505b8ffc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100.scope/container/memory.events
Jan 31 05:55:43 compute-0 podman[94248]: 2026-01-31 05:55:43.046181704 +0000 UTC m=+0.599352925 container died 97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c9b8654da416ab0540cfb54466a46b474c9666893afa93fb1b889e2cd4aa417-merged.mount: Deactivated successfully.
Jan 31 05:55:43 compute-0 sudo[94323]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnxmhttbsshkgowvajeiyvtpeokvtulm ; /usr/bin/python3'
Jan 31 05:55:43 compute-0 sudo[94323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:43 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3243264248' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 05:55:43 compute-0 python3[94325]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:43 compute-0 podman[94248]: 2026-01-31 05:55:43.577094116 +0000 UTC m=+1.130265337 container remove 97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_burnell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:43 compute-0 systemd[1]: libpod-conmon-97616d52424c505b8ffcb7f14dfa7893b36e1c25b0f916c994ca5b80f86bf100.scope: Deactivated successfully.
Jan 31 05:55:43 compute-0 sudo[94102]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:43 compute-0 sudo[94337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:43 compute-0 sudo[94337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:43 compute-0 sudo[94337]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:43 compute-0 podman[94326]: 2026-01-31 05:55:43.616687626 +0000 UTC m=+0.033237285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:43 compute-0 sudo[94362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:55:43 compute-0 sudo[94362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:43 compute-0 podman[94326]: 2026-01-31 05:55:43.822848654 +0000 UTC m=+0.239398293 container create abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692 (image=quay.io/ceph/ceph:v20, name=fervent_lichterman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:43 compute-0 systemd[1]: Started libpod-conmon-abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692.scope.
Jan 31 05:55:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47912effad66a5b138869f9ba6f6f0c0422cf48a974e574337d43013761753ea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47912effad66a5b138869f9ba6f6f0c0422cf48a974e574337d43013761753ea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:44 compute-0 podman[94326]: 2026-01-31 05:55:44.056056394 +0000 UTC m=+0.472606063 container init abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692 (image=quay.io/ceph/ceph:v20, name=fervent_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:44 compute-0 podman[94326]: 2026-01-31 05:55:44.06164448 +0000 UTC m=+0.478194119 container start abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692 (image=quay.io/ceph/ceph:v20, name=fervent_lichterman, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:55:44 compute-0 podman[94401]: 2026-01-31 05:55:43.972617376 +0000 UTC m=+0.029480930 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:44 compute-0 podman[94326]: 2026-01-31 05:55:44.130270866 +0000 UTC m=+0.546820545 container attach abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692 (image=quay.io/ceph/ceph:v20, name=fervent_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:55:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_05:55:44
Jan 31 05:55:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:55:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 05:55:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups']
Jan 31 05:55:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:55:44 compute-0 podman[94401]: 2026-01-31 05:55:44.398735056 +0000 UTC m=+0.455598520 container create 559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:44 compute-0 systemd[1]: Started libpod-conmon-559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6.scope.
Jan 31 05:55:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 31 05:55:44 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/333072138' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 05:55:44 compute-0 fervent_lichterman[94408]: [client.openstack]
Jan 31 05:55:44 compute-0 fervent_lichterman[94408]:         key = AQCtmH1pAAAAABAAje//P7iwPlyQKUe9kDxc/g==
Jan 31 05:55:44 compute-0 fervent_lichterman[94408]:         caps mgr = "allow *"
Jan 31 05:55:44 compute-0 fervent_lichterman[94408]:         caps mon = "profile rbd"
Jan 31 05:55:44 compute-0 fervent_lichterman[94408]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 31 05:55:44 compute-0 systemd[1]: libpod-abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692.scope: Deactivated successfully.
Jan 31 05:55:44 compute-0 ceph-mon[75251]: pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:44 compute-0 podman[94401]: 2026-01-31 05:55:44.715276712 +0000 UTC m=+0.772140196 container init 559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:44 compute-0 podman[94401]: 2026-01-31 05:55:44.718926873 +0000 UTC m=+0.775790367 container start 559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:44 compute-0 affectionate_kalam[94439]: 167 167
Jan 31 05:55:44 compute-0 systemd[1]: libpod-559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6.scope: Deactivated successfully.
Jan 31 05:55:44 compute-0 podman[94401]: 2026-01-31 05:55:44.916917944 +0000 UTC m=+0.973781448 container attach 559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:55:44 compute-0 podman[94401]: 2026-01-31 05:55:44.917726837 +0000 UTC m=+0.974590351 container died 559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 05:55:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:55:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:55:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:55:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ecda55576100ef38b9fab62b37fd9856156485b08a71b30dea7d15dfc7f7570-merged.mount: Deactivated successfully.
Jan 31 05:55:45 compute-0 podman[94401]: 2026-01-31 05:55:45.646355421 +0000 UTC m=+1.703218915 container remove 559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:45 compute-0 systemd[1]: libpod-conmon-559d363db2f782fb651f03b268aa6ac643b2ee87f7418de33b9fe419ead509a6.scope: Deactivated successfully.
Jan 31 05:55:45 compute-0 podman[94326]: 2026-01-31 05:55:45.724285276 +0000 UTC m=+2.140834925 container died abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692 (image=quay.io/ceph/ceph:v20, name=fervent_lichterman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 05:55:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 31 05:55:45 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/333072138' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 05:55:45 compute-0 ceph-mon[75251]: pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:45 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:46 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 05:55:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-47912effad66a5b138869f9ba6f6f0c0422cf48a974e574337d43013761753ea-merged.mount: Deactivated successfully.
Jan 31 05:55:46 compute-0 podman[94477]: 2026-01-31 05:55:46.119106027 +0000 UTC m=+0.362330699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:46 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 31 05:55:46 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 40666c19-416c-4f03-88de-65542557735f (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 05:55:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:55:46 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:46 compute-0 podman[94444]: 2026-01-31 05:55:46.409370992 +0000 UTC m=+1.803175993 container remove abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692 (image=quay.io/ceph/ceph:v20, name=fervent_lichterman, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 05:55:46 compute-0 systemd[1]: libpod-conmon-abb5f799d2f39a30384102205955386a18ddcc966250b720382d17104ade9692.scope: Deactivated successfully.
Jan 31 05:55:46 compute-0 sudo[94323]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:46 compute-0 podman[94477]: 2026-01-31 05:55:46.498587221 +0000 UTC m=+0.741811863 container create fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:55:46 compute-0 systemd[1]: Started libpod-conmon-fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5.scope.
Jan 31 05:55:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811641ac680ca39ac0c281402864b071f7d9d1cd9ea162e27bcb8e419db01b62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811641ac680ca39ac0c281402864b071f7d9d1cd9ea162e27bcb8e419db01b62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811641ac680ca39ac0c281402864b071f7d9d1cd9ea162e27bcb8e419db01b62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811641ac680ca39ac0c281402864b071f7d9d1cd9ea162e27bcb8e419db01b62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:46 compute-0 podman[94477]: 2026-01-31 05:55:46.974077403 +0000 UTC m=+1.217302125 container init fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_haibt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:55:46 compute-0 podman[94477]: 2026-01-31 05:55:46.982879218 +0000 UTC m=+1.226103890 container start fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_haibt, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:55:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:47 compute-0 podman[94477]: 2026-01-31 05:55:47.12836878 +0000 UTC m=+1.371593512 container attach fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 05:55:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 31 05:55:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:47 compute-0 ceph-mon[75251]: osdmap e33: 3 total, 3 up, 3 in
Jan 31 05:55:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 05:55:47 compute-0 sudo[94716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmsbfmcitcnivnltpurnsuqpqrlkkcwp ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769838947.3708389-36748-103574688699562/async_wrapper.py j579136958524 30 /home/zuul/.ansible/tmp/ansible-tmp-1769838947.3708389-36748-103574688699562/AnsiballZ_command.py _'
Jan 31 05:55:47 compute-0 sudo[94716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:47 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 31 05:55:47 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 76232299-bdf9-4ce4-8d84-33b28e84f58a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 05:55:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:55:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:47 compute-0 lvm[94724]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:55:47 compute-0 lvm[94724]: VG ceph_vg0 finished
Jan 31 05:55:47 compute-0 lvm[94727]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:55:47 compute-0 lvm[94727]: VG ceph_vg1 finished
Jan 31 05:55:47 compute-0 lvm[94729]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:55:47 compute-0 lvm[94729]: VG ceph_vg2 finished
Jan 31 05:55:47 compute-0 ansible-async_wrapper.py[94718]: Invoked with j579136958524 30 /home/zuul/.ansible/tmp/ansible-tmp-1769838947.3708389-36748-103574688699562/AnsiballZ_command.py _
Jan 31 05:55:47 compute-0 ansible-async_wrapper.py[94734]: Starting module and watcher
Jan 31 05:55:47 compute-0 magical_haibt[94498]: {}
Jan 31 05:55:47 compute-0 ansible-async_wrapper.py[94734]: Start watching 94735 (30)
Jan 31 05:55:47 compute-0 ansible-async_wrapper.py[94735]: Start module (94735)
Jan 31 05:55:47 compute-0 ansible-async_wrapper.py[94718]: Return async_wrapper task started.
Jan 31 05:55:47 compute-0 sudo[94716]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:47 compute-0 systemd[1]: libpod-fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5.scope: Deactivated successfully.
Jan 31 05:55:47 compute-0 systemd[1]: libpod-fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5.scope: Consumed 1.091s CPU time.
Jan 31 05:55:47 compute-0 podman[94477]: 2026-01-31 05:55:47.854087685 +0000 UTC m=+2.097312357 container died fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:55:48 compute-0 python3[94736]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-811641ac680ca39ac0c281402864b071f7d9d1cd9ea162e27bcb8e419db01b62-merged.mount: Deactivated successfully.
Jan 31 05:55:48 compute-0 podman[94748]: 2026-01-31 05:55:48.163966415 +0000 UTC m=+0.039678023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 31 05:55:48 compute-0 podman[94477]: 2026-01-31 05:55:48.722100754 +0000 UTC m=+2.965325426 container remove fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 05:55:48 compute-0 sudo[94362]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:55:48 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 05:55:48 compute-0 sudo[94808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alywbehmjsbfiizxtmkgtxuvvdtazsad ; /usr/bin/python3'
Jan 31 05:55:48 compute-0 sudo[94808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v87: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 31 05:55:49 compute-0 python3[94810]: ansible-ansible.legacy.async_status Invoked with jid=j579136958524.94718 mode=status _async_dir=/root/.ansible_async
Jan 31 05:55:49 compute-0 sudo[94808]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:49 compute-0 podman[94748]: 2026-01-31 05:55:49.262060807 +0000 UTC m=+1.137772325 container create e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf (image=quay.io/ceph/ceph:v20, name=upbeat_wiles, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:49 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev ca7f3011-87fc-4afc-b37d-21e55083a02e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:49 compute-0 systemd[1]: libpod-conmon-fc451a8b568c0fc2c4efb8db267ecc92aa56dbfe4c23432fbd417070318029a5.scope: Deactivated successfully.
Jan 31 05:55:49 compute-0 systemd[1]: Started libpod-conmon-e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf.scope.
Jan 31 05:55:49 compute-0 ceph-mon[75251]: pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:49 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:49 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:49 compute-0 ceph-mon[75251]: osdmap e34: 3 total, 3 up, 3 in
Jan 31 05:55:49 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=34 pruub=8.385955811s) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active pruub 66.878692627s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=34 pruub=8.385955811s) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown pruub 66.878692627s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.12( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.11( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.14( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.1( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.16( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.18( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.17( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.19( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.1d( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.1c( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.1a( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.1f( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.1e( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.3( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.5( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.4( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.6( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.7( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.8( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.9( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.2( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.b( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.e( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 35 pg[2.f( empty local-lis/les=17/18 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c5385cf56530618bd884ab931f2daa1bf9beb01bc6782de0a05ab2fe5b2309/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c5385cf56530618bd884ab931f2daa1bf9beb01bc6782de0a05ab2fe5b2309/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:55:49 compute-0 podman[94748]: 2026-01-31 05:55:49.589313119 +0000 UTC m=+1.465024707 container init e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf (image=quay.io/ceph/ceph:v20, name=upbeat_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:49 compute-0 podman[94748]: 2026-01-31 05:55:49.595570543 +0000 UTC m=+1.471282081 container start e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf (image=quay.io/ceph/ceph:v20, name=upbeat_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:49 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev fbfd1cd3-4c62-43bb-8b9c-5d10d997839f (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 05:55:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hdercq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hdercq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hdercq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 05:55:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 31 05:55:49 compute-0 podman[94748]: 2026-01-31 05:55:49.716849323 +0000 UTC m=+1.592560831 container attach e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf (image=quay.io/ceph/ceph:v20, name=upbeat_wiles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:55:49 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:49 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.hdercq on compute-0
Jan 31 05:55:49 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.hdercq on compute-0
Jan 31 05:55:49 compute-0 sudo[94837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:49 compute-0 sudo[94837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:49 compute-0 sudo[94837]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:49 compute-0 sudo[94862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:55:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 31 05:55:49 compute-0 sudo[94862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:50 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:55:50 compute-0 upbeat_wiles[94814]: 
Jan 31 05:55:50 compute-0 upbeat_wiles[94814]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 05:55:50 compute-0 systemd[1]: libpod-e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf.scope: Deactivated successfully.
Jan 31 05:55:50 compute-0 podman[94748]: 2026-01-31 05:55:50.071183778 +0000 UTC m=+1.946895286 container died e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf (image=quay.io/ceph/ceph:v20, name=upbeat_wiles, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 05:55:50 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 31 05:55:50 compute-0 sudo[94958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shpnyvyzagkrqhgnulnweqlyjinbshqm ; /usr/bin/python3'
Jan 31 05:55:50 compute-0 sudo[94958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:50 compute-0 ceph-mgr[75550]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Jan 31 05:55:50 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 5999b0a3-8440-468f-866d-428af2eca2b2 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 05:55:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 31 05:55:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 python3[94960]: ansible-ansible.legacy.async_status Invoked with jid=j579136958524.94718 mode=status _async_dir=/root/.ansible_async
Jan 31 05:55:50 compute-0 sudo[94958]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=34/36 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=17/17 les/c/f=18/18/0 sis=34) [2] r=0 lpr=34 pi=[17,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-63c5385cf56530618bd884ab931f2daa1bf9beb01bc6782de0a05ab2fe5b2309-merged.mount: Deactivated successfully.
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:50 compute-0 ceph-mon[75251]: pgmap v87: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:50 compute-0 ceph-mon[75251]: osdmap e35: 3 total, 3 up, 3 in
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hdercq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hdercq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:50 compute-0 ceph-mon[75251]: osdmap e36: 3 total, 3 up, 3 in
Jan 31 05:55:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 05:55:50 compute-0 podman[94901]: 2026-01-31 05:55:50.905144531 +0000 UTC m=+0.820204902 container remove e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf (image=quay.io/ceph/ceph:v20, name=upbeat_wiles, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:50 compute-0 systemd[1]: libpod-conmon-e7e4cfe977e3c83145d511bd004713a1e256feca1ec4f407150d49a2ceb8c4bf.scope: Deactivated successfully.
Jan 31 05:55:50 compute-0 ansible-async_wrapper.py[94735]: Module complete (94735)
Jan 31 05:55:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v90: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:55:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:55:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 31 05:55:51 compute-0 podman[94993]: 2026-01-31 05:55:51.096488907 +0000 UTC m=+0.032784312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:51 compute-0 podman[94993]: 2026-01-31 05:55:51.260416132 +0000 UTC m=+0.196711477 container create 42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bouman, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:55:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 05:55:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 05:55:51 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 31 05:55:51 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 7d7a6f82-49f2-4f7e-80ae-415ba22a1bd3 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 05:55:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:55:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:51 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=11.305858612s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 71.764038086s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:55:51 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=11.305858612s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 71.764038086s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:51 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 31 05:55:51 compute-0 systemd[1]: Started libpod-conmon-42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8.scope.
Jan 31 05:55:51 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 31 05:55:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:51 compute-0 podman[94993]: 2026-01-31 05:55:51.439476357 +0000 UTC m=+0.375771682 container init 42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bouman, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:51 compute-0 podman[94993]: 2026-01-31 05:55:51.444379903 +0000 UTC m=+0.380675248 container start 42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:55:51 compute-0 magical_bouman[95010]: 167 167
Jan 31 05:55:51 compute-0 systemd[1]: libpod-42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8.scope: Deactivated successfully.
Jan 31 05:55:51 compute-0 podman[94993]: 2026-01-31 05:55:51.499087464 +0000 UTC m=+0.435382799 container attach 42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bouman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:51 compute-0 podman[94993]: 2026-01-31 05:55:51.499839024 +0000 UTC m=+0.436134329 container died 42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 05:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-966cca860573589a8dbc4a8f9925cd55da0454e7b303ee118c62dd541b2420aa-merged.mount: Deactivated successfully.
Jan 31 05:55:51 compute-0 podman[94993]: 2026-01-31 05:55:51.56373849 +0000 UTC m=+0.500033795 container remove 42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:55:51 compute-0 systemd[1]: libpod-conmon-42958121ce950ad41587cadc8f8f05d2484ef04369138557b5eec91af22275e8.scope: Deactivated successfully.
Jan 31 05:55:51 compute-0 systemd[1]: Reloading.
Jan 31 05:55:51 compute-0 systemd-rc-local-generator[95099]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:55:51 compute-0 systemd-sysv-generator[95105]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:55:51 compute-0 ceph-mon[75251]: Deploying daemon rgw.rgw.compute-0.hdercq on compute-0
Jan 31 05:55:51 compute-0 ceph-mon[75251]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:55:51 compute-0 ceph-mon[75251]: pgmap v90: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 05:55:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:51 compute-0 ceph-mon[75251]: osdmap e37: 3 total, 3 up, 3 in
Jan 31 05:55:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:55:51 compute-0 sudo[95077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lablitmnqpklujbutjovorbuocxdgpbg ; /usr/bin/python3'
Jan 31 05:55:51 compute-0 sudo[95077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:51 compute-0 systemd[1]: Reloading.
Jan 31 05:55:51 compute-0 systemd-rc-local-generator[95140]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:55:51 compute-0 systemd-sysv-generator[95146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:55:51 compute-0 python3[95114]: ansible-ansible.legacy.async_status Invoked with jid=j579136958524.94718 mode=status _async_dir=/root/.ansible_async
Jan 31 05:55:51 compute-0 sudo[95077]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:52 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.hdercq for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:55:52 compute-0 sudo[95199]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpnfbghsbjvpbsjrpqxpkajvbqmupnby ; /usr/bin/python3'
Jan 31 05:55:52 compute-0 sudo[95199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:52 compute-0 python3[95204]: ansible-ansible.legacy.async_status Invoked with jid=j579136958524.94718 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 31 05:55:52 compute-0 sudo[95199]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 7b8ddec7-3b7f-4dee-a225-599a2bd091a9 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 40666c19-416c-4f03-88de-65542557735f (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 40666c19-416c-4f03-88de-65542557735f (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 76232299-bdf9-4ce4-8d84-33b28e84f58a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 76232299-bdf9-4ce4-8d84-33b28e84f58a (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev ca7f3011-87fc-4afc-b37d-21e55083a02e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event ca7f3011-87fc-4afc-b37d-21e55083a02e (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 5999b0a3-8440-468f-866d-428af2eca2b2 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 5999b0a3-8440-468f-866d-428af2eca2b2 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 7d7a6f82-49f2-4f7e-80ae-415ba22a1bd3 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 7d7a6f82-49f2-4f7e-80ae-415ba22a1bd3 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 7b8ddec7-3b7f-4dee-a225-599a2bd091a9 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 7b8ddec7-3b7f-4dee-a225-599a2bd091a9 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 31 05:55:52 compute-0 podman[95251]: 2026-01-31 05:55:52.294053622 +0000 UTC m=+0.036179536 container create 3c764da63a2ad557395a733336de77396c03897461096f2a716abe77d5853fd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-rgw-rgw-compute-0-hdercq, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=36 pruub=13.449251175s) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active pruub 81.752014160s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=36 pruub=13.449251175s) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown pruub 81.752014160s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.8( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=37/38 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=17/18 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:52 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:52 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 31 05:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fa7a631e20b96f00b03a6f21690eaa5c922448210bef507cc2cb180f7a089c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fa7a631e20b96f00b03a6f21690eaa5c922448210bef507cc2cb180f7a089c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fa7a631e20b96f00b03a6f21690eaa5c922448210bef507cc2cb180f7a089c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fa7a631e20b96f00b03a6f21690eaa5c922448210bef507cc2cb180f7a089c/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.hdercq supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:52 compute-0 podman[95251]: 2026-01-31 05:55:52.356206459 +0000 UTC m=+0.098332393 container init 3c764da63a2ad557395a733336de77396c03897461096f2a716abe77d5853fd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-rgw-rgw-compute-0-hdercq, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:55:52 compute-0 podman[95251]: 2026-01-31 05:55:52.361740143 +0000 UTC m=+0.103866047 container start 3c764da63a2ad557395a733336de77396c03897461096f2a716abe77d5853fd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-rgw-rgw-compute-0-hdercq, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 05:55:52 compute-0 bash[95251]: 3c764da63a2ad557395a733336de77396c03897461096f2a716abe77d5853fd6
Jan 31 05:55:52 compute-0 podman[95251]: 2026-01-31 05:55:52.274790487 +0000 UTC m=+0.016916401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:52 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.hdercq for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:55:52 compute-0 radosgw[95270]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:55:52 compute-0 radosgw[95270]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 31 05:55:52 compute-0 radosgw[95270]: framework: beast
Jan 31 05:55:52 compute-0 radosgw[95270]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 31 05:55:52 compute-0 radosgw[95270]: init_numa not setting numa affinity
Jan 31 05:55:52 compute-0 sudo[94862]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev fbfd1cd3-4c62-43bb-8b9c-5d10d997839f (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event fbfd1cd3-4c62-43bb-8b9c-5d10d997839f (Updating rgw.rgw deployment (+1 -> 1)) in 3 seconds
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev cb896cf5-30e6-40af-afe2-8c84aecd487a (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.olydew", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.olydew", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.olydew", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 05:55:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.olydew on compute-0
Jan 31 05:55:52 compute-0 ceph-mgr[75550]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.olydew on compute-0
Jan 31 05:55:52 compute-0 sudo[95299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:52 compute-0 sudo[95299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:52 compute-0 sudo[95299]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:52 compute-0 sudo[95324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7
Jan 31 05:55:52 compute-0 sudo[95324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:52 compute-0 sudo[95372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phbrjhqvouqynqtrkdprzxwbginnpusm ; /usr/bin/python3'
Jan 31 05:55:52 compute-0 sudo[95372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:52 compute-0 python3[95374]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:52 compute-0 podman[95382]: 2026-01-31 05:55:52.801496191 +0000 UTC m=+0.031845326 container create f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8 (image=quay.io/ceph/ceph:v20, name=naughty_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:55:52 compute-0 ceph-mon[75251]: 2.1d scrub starts
Jan 31 05:55:52 compute-0 ceph-mon[75251]: 2.1d scrub ok
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:55:52 compute-0 ceph-mon[75251]: osdmap e38: 3 total, 3 up, 3 in
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mon[75251]: Saving service rgw.rgw spec with placement compute-0
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.olydew", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.olydew", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 05:55:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:52 compute-0 ceph-mon[75251]: Deploying daemon mds.cephfs.compute-0.olydew on compute-0
Jan 31 05:55:52 compute-0 ansible-async_wrapper.py[94734]: Done in kid B.
Jan 31 05:55:52 compute-0 systemd[1]: Started libpod-conmon-f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8.scope.
Jan 31 05:55:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7296a571a10ddd9473a3c225065ddcc2323e4c12de845dfb6053f2b0dcf304d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7296a571a10ddd9473a3c225065ddcc2323e4c12de845dfb6053f2b0dcf304d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:52 compute-0 podman[95382]: 2026-01-31 05:55:52.875393734 +0000 UTC m=+0.105742909 container init f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8 (image=quay.io/ceph/ceph:v20, name=naughty_chaum, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:52 compute-0 podman[95382]: 2026-01-31 05:55:52.882173313 +0000 UTC m=+0.112522458 container start f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8 (image=quay.io/ceph/ceph:v20, name=naughty_chaum, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:52 compute-0 podman[95382]: 2026-01-31 05:55:52.786536075 +0000 UTC m=+0.016885230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:52 compute-0 podman[95382]: 2026-01-31 05:55:52.888609252 +0000 UTC m=+0.118958397 container attach f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8 (image=quay.io/ceph/ceph:v20, name=naughty_chaum, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:55:52 compute-0 podman[95429]: 2026-01-31 05:55:52.939226958 +0000 UTC m=+0.041939326 container create c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kapitsa, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:52 compute-0 systemd[1]: Started libpod-conmon-c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15.scope.
Jan 31 05:55:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:52 compute-0 podman[95429]: 2026-01-31 05:55:52.999539064 +0000 UTC m=+0.102251442 container init c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:55:53 compute-0 podman[95429]: 2026-01-31 05:55:53.003570806 +0000 UTC m=+0.106283164 container start c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:53 compute-0 magical_kapitsa[95451]: 167 167
Jan 31 05:55:53 compute-0 systemd[1]: libpod-c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15.scope: Deactivated successfully.
Jan 31 05:55:53 compute-0 conmon[95451]: conmon c9049759f9d899451d1a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15.scope/container/memory.events
Jan 31 05:55:53 compute-0 podman[95429]: 2026-01-31 05:55:53.007107754 +0000 UTC m=+0.109820112 container attach c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:55:53 compute-0 podman[95429]: 2026-01-31 05:55:53.007552657 +0000 UTC m=+0.110265025 container died c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 05:55:53 compute-0 podman[95429]: 2026-01-31 05:55:52.916195618 +0000 UTC m=+0.018907996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2afc0fbd654080affa8e5fd515e8c7c6b4a94d249f4d4ee7728551eea7d5aca1-merged.mount: Deactivated successfully.
Jan 31 05:55:53 compute-0 podman[95429]: 2026-01-31 05:55:53.051923789 +0000 UTC m=+0.154636147 container remove c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_kapitsa, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:53 compute-0 systemd[1]: libpod-conmon-c9049759f9d899451d1ade2246b8fded13d787b999acae694f031aed255e4b15.scope: Deactivated successfully.
Jan 31 05:55:53 compute-0 systemd[1]: Reloading.
Jan 31 05:55:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v93: 131 pgs: 33 peering, 62 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:55:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 31 05:55:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 05:55:53 compute-0 systemd-rc-local-generator[95505]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:55:53 compute-0 systemd-sysv-generator[95508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:55:53 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14253 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:55:53 compute-0 naughty_chaum[95415]: 
Jan 31 05:55:53 compute-0 naughty_chaum[95415]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 05:55:53 compute-0 podman[95382]: 2026-01-31 05:55:53.284511232 +0000 UTC m=+0.514860367 container died f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8 (image=quay.io/ceph/ceph:v20, name=naughty_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 05:55:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 31 05:55:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 05:55:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 05:55:53 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 31 05:55:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[8.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=13.394701958s) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 82.701606750s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:55:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3593282635' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=13.394701958s) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown pruub 82.701606750s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=36/39 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=17/17 les/c/f=18/18/0 sis=36) [1] r=0 lpr=36 pi=[17,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:53 compute-0 systemd[1]: libpod-f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8.scope: Deactivated successfully.
Jan 31 05:55:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7296a571a10ddd9473a3c225065ddcc2323e4c12de845dfb6053f2b0dcf304d-merged.mount: Deactivated successfully.
Jan 31 05:55:53 compute-0 podman[95382]: 2026-01-31 05:55:53.353153109 +0000 UTC m=+0.583502244 container remove f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8 (image=quay.io/ceph/ceph:v20, name=naughty_chaum, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:55:53 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 31 05:55:53 compute-0 systemd[1]: libpod-conmon-f80a65c4bfc76c26bf7bdf63f14871f92cba5e631e7102754498130ef2ccafe8.scope: Deactivated successfully.
Jan 31 05:55:53 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 31 05:55:53 compute-0 sudo[95372]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:53 compute-0 systemd[1]: Reloading.
Jan 31 05:55:53 compute-0 systemd-rc-local-generator[95564]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 05:55:53 compute-0 systemd-sysv-generator[95568]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 05:55:53 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.olydew for 797ee2fc-ca49-5eee-87c0-542bb035a7d7...
Jan 31 05:55:53 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 31 05:55:53 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 31 05:55:53 compute-0 ceph-mon[75251]: 2.1c scrub starts
Jan 31 05:55:53 compute-0 ceph-mon[75251]: 2.1c scrub ok
Jan 31 05:55:53 compute-0 ceph-mon[75251]: pgmap v93: 131 pgs: 33 peering, 62 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:55:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 05:55:53 compute-0 ceph-mon[75251]: from='client.14253 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:55:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:55:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 05:55:53 compute-0 ceph-mon[75251]: osdmap e39: 3 total, 3 up, 3 in
Jan 31 05:55:53 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3593282635' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 05:55:53 compute-0 podman[95624]: 2026-01-31 05:55:53.913251122 +0000 UTC m=+0.052773587 container create 9fc0f4643169725c8471f8359fd336b94cfc6f64cd25ffb399179e9eccec4a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mds-cephfs-compute-0-olydew, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 05:55:53 compute-0 sudo[95661]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihprsoanzrclppqfemutamlyvufxhtvd ; /usr/bin/python3'
Jan 31 05:55:53 compute-0 sudo[95661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f49454fb68f1e11d1c076bd3e690efa98f886e91d37d56c37173a858476718b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f49454fb68f1e11d1c076bd3e690efa98f886e91d37d56c37173a858476718b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f49454fb68f1e11d1c076bd3e690efa98f886e91d37d56c37173a858476718b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f49454fb68f1e11d1c076bd3e690efa98f886e91d37d56c37173a858476718b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.olydew supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:53 compute-0 podman[95624]: 2026-01-31 05:55:53.884489553 +0000 UTC m=+0.024012078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:53 compute-0 podman[95624]: 2026-01-31 05:55:53.984355468 +0000 UTC m=+0.123878003 container init 9fc0f4643169725c8471f8359fd336b94cfc6f64cd25ffb399179e9eccec4a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mds-cephfs-compute-0-olydew, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:55:53 compute-0 podman[95624]: 2026-01-31 05:55:53.992037861 +0000 UTC m=+0.131560326 container start 9fc0f4643169725c8471f8359fd336b94cfc6f64cd25ffb399179e9eccec4a9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mds-cephfs-compute-0-olydew, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 31 05:55:53 compute-0 bash[95624]: 9fc0f4643169725c8471f8359fd336b94cfc6f64cd25ffb399179e9eccec4a9f
Jan 31 05:55:54 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.olydew for 797ee2fc-ca49-5eee-87c0-542bb035a7d7.
Jan 31 05:55:54 compute-0 ceph-mds[95670]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:55:54 compute-0 ceph-mds[95670]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 31 05:55:54 compute-0 ceph-mds[95670]: main not setting numa affinity
Jan 31 05:55:54 compute-0 ceph-mds[95670]: pidfile_write: ignore empty --pid-file
Jan 31 05:55:54 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mds-cephfs-compute-0-olydew[95666]: starting mds.cephfs.compute-0.olydew at 
Jan 31 05:55:54 compute-0 sudo[95324]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:55:54 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew Updating MDS map to version 2 from mon.0
Jan 31 05:55:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:55:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 05:55:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev cb896cf5-30e6-40af-afe2-8c84aecd487a (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 05:55:54 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event cb896cf5-30e6-40af-afe2-8c84aecd487a (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 31 05:55:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 31 05:55:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 05:55:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 python3[95665]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.545930862s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 85.210700989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37 pruub=15.289635658s) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active pruub 89.954421997s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=10.545930862s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 85.210700989s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37 pruub=15.289635658s) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown pruub 89.954421997s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.1f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.16( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.1b( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.19( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.1a( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.1d( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.1c( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.1e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.e( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.f( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.10( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.11( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.12( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.14( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.13( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.15( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.17( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.18( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.5( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.6( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.8( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.7( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.9( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.1( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.2( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.4( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 39 pg[4.3( empty local-lis/les=19/20 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 sudo[95689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:55:54 compute-0 sudo[95689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:54 compute-0 sudo[95689]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:54 compute-0 podman[95712]: 2026-01-31 05:55:54.194299971 +0000 UTC m=+0.051707188 container create 9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950 (image=quay.io/ceph/ceph:v20, name=nervous_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:54 compute-0 sudo[95726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:54 compute-0 sudo[95726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:54 compute-0 sudo[95726]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:54 compute-0 systemd[1]: Started libpod-conmon-9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950.scope.
Jan 31 05:55:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:54 compute-0 podman[95712]: 2026-01-31 05:55:54.173674818 +0000 UTC m=+0.031082065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/980b16ebf87e6b996f3d919a4ecf4c0c91ba77ce187a73c2cb7a2fbc2e424838/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/980b16ebf87e6b996f3d919a4ecf4c0c91ba77ce187a73c2cb7a2fbc2e424838/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:54 compute-0 sudo[95754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:55:54 compute-0 sudo[95754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:54 compute-0 podman[95712]: 2026-01-31 05:55:54.285544987 +0000 UTC m=+0.142952224 container init 9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950 (image=quay.io/ceph/ceph:v20, name=nervous_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 05:55:54 compute-0 podman[95712]: 2026-01-31 05:55:54.291540213 +0000 UTC m=+0.148947430 container start 9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950 (image=quay.io/ceph/ceph:v20, name=nervous_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:55:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 31 05:55:54 compute-0 podman[95712]: 2026-01-31 05:55:54.295198185 +0000 UTC m=+0.152605472 container attach 9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950 (image=quay.io/ceph/ceph:v20, name=nervous_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:55:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3593282635' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 05:55:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 05:55:54 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.1d( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.1e( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.b( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.6( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.19( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.3( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=37/40 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.16( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.17( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=19/19 les/c/f=20/20/0 sis=37) [0] r=0 lpr=37 pi=[19,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[8.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=39/40 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:54 compute-0 podman[96410]: 2026-01-31 05:55:54.745736883 +0000 UTC m=+0.067405124 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 05:55:54 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:55:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} v 0)
Jan 31 05:55:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 05:55:54 compute-0 nervous_varahamihira[95762]: 
Jan 31 05:55:54 compute-0 nervous_varahamihira[95762]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 31 05:55:54 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 31 05:55:54 compute-0 systemd[1]: libpod-9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950.scope: Deactivated successfully.
Jan 31 05:55:54 compute-0 podman[95712]: 2026-01-31 05:55:54.789009956 +0000 UTC m=+0.646417203 container died 9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950 (image=quay.io/ceph/ceph:v20, name=nervous_varahamihira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:54 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 31 05:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-980b16ebf87e6b996f3d919a4ecf4c0c91ba77ce187a73c2cb7a2fbc2e424838-merged.mount: Deactivated successfully.
Jan 31 05:55:54 compute-0 podman[96410]: 2026-01-31 05:55:54.856952464 +0000 UTC m=+0.178620615 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 31 05:55:54 compute-0 systemd[76640]: Starting Mark boot as successful...
Jan 31 05:55:54 compute-0 systemd[76640]: Finished Mark boot as successful.
Jan 31 05:55:54 compute-0 ceph-mon[75251]: 2.a scrub starts
Jan 31 05:55:54 compute-0 ceph-mon[75251]: 2.a scrub ok
Jan 31 05:55:54 compute-0 ceph-mon[75251]: 3.17 scrub starts
Jan 31 05:55:54 compute-0 ceph-mon[75251]: 3.17 scrub ok
Jan 31 05:55:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3593282635' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 05:55:54 compute-0 ceph-mon[75251]: osdmap e40: 3 total, 3 up, 3 in
Jan 31 05:55:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 05:55:54 compute-0 podman[95712]: 2026-01-31 05:55:54.953428634 +0000 UTC m=+0.810835881 container remove 9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950 (image=quay.io/ceph/ceph:v20, name=nervous_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 05:55:54 compute-0 systemd[1]: libpod-conmon-9ad743878ef5413364cac7fef84d2057bf996174db89c9f768b6f1b7490ed950.scope: Deactivated successfully.
Jan 31 05:55:54 compute-0 sudo[95661]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e3 new map
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-31T05:55:55:047449+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T05:55:33.191774+0000
                                           modified        2026-01-31T05:55:33.191774+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.olydew{-1:14255} state up:standby seq 1 addr [v2:192.168.122.100:6814/1545676168,v1:192.168.122.100:6815/1545676168] compat {c=[1],r=[1],i=[1fff]}]
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew Updating MDS map to version 3 from mon.0
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew Monitors have assigned me to become a standby
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1545676168,v1:192.168.122.100:6815/1545676168] up:boot
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1545676168,v1:192.168.122.100:6815/1545676168] as mds.0
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.olydew assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.olydew"} v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.olydew"} : dispatch
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e3 all = 0
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e4 new map
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-31T05:55:55:054852+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T05:55:33.191774+0000
                                           modified        2026-01-31T05:55:55.054841+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14255}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.olydew{0:14255} state up:creating seq 1 addr [v2:192.168.122.100:6814/1545676168,v1:192.168.122.100:6815/1545676168] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.olydew=up:creating}
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew Updating MDS map to version 4 from mon.0
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x1
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x100
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x600
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x601
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x602
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x603
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x604
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x605
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x606
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x607
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x608
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.cache creating system inode with ino:0x609
Jan 31 05:55:55 compute-0 ceph-mds[95670]: mds.0.4 creating_done
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.olydew is now active in filesystem cephfs as rank 0
Jan 31 05:55:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v96: 178 pgs: 33 peering, 109 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 05:55:55 compute-0 ceph-mgr[75550]: [progress INFO root] Writing back 11 completed events
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:55 compute-0 sudo[95754]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:55:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:55:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:55 compute-0 sudo[96617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:55 compute-0 sudo[96617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:55 compute-0 sudo[96617]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:55 compute-0 sudo[96642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:55:55 compute-0 sudo[96642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:55 compute-0 sudo[96690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frlvhafvzfovfmjoszpvkyxtzefktvxr ; /usr/bin/python3'
Jan 31 05:55:55 compute-0 sudo[96690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 41 pg[9.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:55 compute-0 python3[96692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:55 compute-0 podman[96695]: 2026-01-31 05:55:55.942241699 +0000 UTC m=+0.048908460 container create ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874 (image=quay.io/ceph/ceph:v20, name=pedantic_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:55 compute-0 systemd[1]: Started libpod-conmon-ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874.scope.
Jan 31 05:55:55 compute-0 podman[96718]: 2026-01-31 05:55:55.980941545 +0000 UTC m=+0.047203953 container create 615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Jan 31 05:55:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c754a266025d34defae999c61376187766393813690c7eecd4aee612838970ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c754a266025d34defae999c61376187766393813690c7eecd4aee612838970ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:56 compute-0 podman[96695]: 2026-01-31 05:55:55.912720639 +0000 UTC m=+0.019387450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:56 compute-0 systemd[1]: Started libpod-conmon-615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963.scope.
Jan 31 05:55:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:56 compute-0 podman[96695]: 2026-01-31 05:55:56.02683581 +0000 UTC m=+0.133502631 container init ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874 (image=quay.io/ceph/ceph:v20, name=pedantic_shamir, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 31 05:55:56 compute-0 podman[96718]: 2026-01-31 05:55:56.033980188 +0000 UTC m=+0.100242636 container init 615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hermann, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:56 compute-0 podman[96695]: 2026-01-31 05:55:56.035408188 +0000 UTC m=+0.142074929 container start ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874 (image=quay.io/ceph/ceph:v20, name=pedantic_shamir, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:56 compute-0 podman[96695]: 2026-01-31 05:55:56.039293976 +0000 UTC m=+0.145960737 container attach ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874 (image=quay.io/ceph/ceph:v20, name=pedantic_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:56 compute-0 podman[96718]: 2026-01-31 05:55:56.040968062 +0000 UTC m=+0.107230470 container start 615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:55:56 compute-0 lucid_hermann[96741]: 167 167
Jan 31 05:55:56 compute-0 systemd[1]: libpod-615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963.scope: Deactivated successfully.
Jan 31 05:55:56 compute-0 podman[96718]: 2026-01-31 05:55:56.04736989 +0000 UTC m=+0.113632288 container attach 615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 05:55:56 compute-0 podman[96718]: 2026-01-31 05:55:56.047854274 +0000 UTC m=+0.114116672 container died 615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hermann, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:56 compute-0 podman[96718]: 2026-01-31 05:55:55.965005792 +0000 UTC m=+0.031268200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:55:56 compute-0 ceph-mon[75251]: 3.18 scrub starts
Jan 31 05:55:56 compute-0 ceph-mon[75251]: 3.18 scrub ok
Jan 31 05:55:56 compute-0 ceph-mon[75251]: mds.? [v2:192.168.122.100:6814/1545676168,v1:192.168.122.100:6815/1545676168] up:boot
Jan 31 05:55:56 compute-0 ceph-mon[75251]: daemon mds.cephfs.compute-0.olydew assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 05:55:56 compute-0 ceph-mon[75251]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 05:55:56 compute-0 ceph-mon[75251]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 05:55:56 compute-0 ceph-mon[75251]: Cluster is now healthy
Jan 31 05:55:56 compute-0 ceph-mon[75251]: fsmap cephfs:0 1 up:standby
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.olydew"} : dispatch
Jan 31 05:55:56 compute-0 ceph-mon[75251]: fsmap cephfs:1 {0=cephfs.compute-0.olydew=up:creating}
Jan 31 05:55:56 compute-0 ceph-mon[75251]: daemon mds.cephfs.compute-0.olydew is now active in filesystem cephfs as rank 0
Jan 31 05:55:56 compute-0 ceph-mon[75251]: pgmap v96: 178 pgs: 33 peering, 109 unknown, 36 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:55:56 compute-0 ceph-mon[75251]: osdmap e41: 3 total, 3 up, 3 in
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:55:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:55:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e5 new map
Jan 31 05:55:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-31T05:55:56:060604+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T05:55:33.191774+0000
                                           modified        2026-01-31T05:55:56.060602+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14255}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14255 members: 14255
                                           [mds.cephfs.compute-0.olydew{0:14255} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1545676168,v1:192.168.122.100:6815/1545676168] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 31 05:55:56 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew Updating MDS map to version 5 from mon.0
Jan 31 05:55:56 compute-0 ceph-mds[95670]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 05:55:56 compute-0 ceph-mds[95670]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 31 05:55:56 compute-0 ceph-mds[95670]: mds.0.4 recovery_done -- successful recovery!
Jan 31 05:55:56 compute-0 ceph-mds[95670]: mds.0.4 active_start
Jan 31 05:55:56 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1545676168,v1:192.168.122.100:6815/1545676168] up:active
Jan 31 05:55:56 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.olydew=up:active}
Jan 31 05:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-68381b0aea459de8e5c6de75c2cb329fc865b157aeb5985ba839d0b075cd5fa3-merged.mount: Deactivated successfully.
Jan 31 05:55:56 compute-0 podman[96718]: 2026-01-31 05:55:56.101784742 +0000 UTC m=+0.168047140 container remove 615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hermann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 05:55:56 compute-0 systemd[1]: libpod-conmon-615e702dafd3f7acd8e2fe7d1d9c0de2f430cca129e622c3353c10ee55340963.scope: Deactivated successfully.
Jan 31 05:55:56 compute-0 podman[96792]: 2026-01-31 05:55:56.209441524 +0000 UTC m=+0.039672663 container create 7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:55:56 compute-0 systemd[1]: Started libpod-conmon-7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278.scope.
Jan 31 05:55:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905532ae9f08d9f0d7bfb24e65a587a122bc13fb8cc153749a4f096be77803f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905532ae9f08d9f0d7bfb24e65a587a122bc13fb8cc153749a4f096be77803f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905532ae9f08d9f0d7bfb24e65a587a122bc13fb8cc153749a4f096be77803f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905532ae9f08d9f0d7bfb24e65a587a122bc13fb8cc153749a4f096be77803f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905532ae9f08d9f0d7bfb24e65a587a122bc13fb8cc153749a4f096be77803f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:56 compute-0 podman[96792]: 2026-01-31 05:55:56.190784726 +0000 UTC m=+0.021015905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 31 05:55:56 compute-0 podman[96792]: 2026-01-31 05:55:56.324552511 +0000 UTC m=+0.154783810 container init 7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:56 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 05:55:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 05:55:56 compute-0 podman[96792]: 2026-01-31 05:55:56.330566408 +0000 UTC m=+0.160797587 container start 7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:56 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 31 05:55:56 compute-0 podman[96792]: 2026-01-31 05:55:56.38318066 +0000 UTC m=+0.213411849 container attach 7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:55:56 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:55:56 compute-0 pedantic_shamir[96736]: 
Jan 31 05:55:56 compute-0 pedantic_shamir[96736]: [{"container_id": "20fded41c264", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.15%", "created": "2026-01-31T05:54:16.414064Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T05:54:16.458606Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T05:55:55.513961Z", "memory_usage": 7786725, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-31T05:54:16.306720Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@crash.compute-0", "version": "20.2.0"}, {"container_id": "9fc0f4643169", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "5.04%", "created": "2026-01-31T05:55:54.005062Z", "daemon_id": "cephfs.compute-0.olydew", "daemon_name": "mds.cephfs.compute-0.olydew", "daemon_type": "mds", "events": ["2026-01-31T05:55:54.059827Z daemon:mds.cephfs.compute-0.olydew [INFO] \"Deployed mds.cephfs.compute-0.olydew on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-01-31T05:55:55.514941Z", "memory_usage": 15330181, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-31T05:55:53.892877Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@mds.cephfs.compute-0.olydew", "version": "20.2.0"}, {"container_id": "f894eac92541", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "13.29%", "created": "2026-01-31T05:53:28.551544Z", "daemon_id": "compute-0.vavqfa", "daemon_name": "mgr.compute-0.vavqfa", "daemon_type": "mgr", "events": ["2026-01-31T05:54:20.466775Z daemon:mgr.compute-0.vavqfa [INFO] \"Reconfigured mgr.compute-0.vavqfa on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T05:55:55.513724Z", "memory_usage": 548405248, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-31T05:53:28.483786Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@mgr.compute-0.vavqfa", "version": "20.2.0"}, {"container_id": "57c5f39a8765", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.34%", "created": "2026-01-31T05:53:24.674436Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T05:54:19.928368Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T05:55:55.513470Z", "memory_request": 2147483648, "memory_usage": 41450209, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-31T05:53:26.745614Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@mon.compute-0", "version": "20.2.0"}, {"container_id": "fcb48056ec9a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.27%", "created": "2026-01-31T05:54:39.095857Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-31T05:54:39.288325Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T05:55:55.514194Z", "memory_request": 4294967296, "memory_usage": 59926118, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T05:54:38.888149Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@osd.0", "version": "20.2.0"}, {"container_id": "992c3eed0bdd", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.43%", "created": "2026-01-31T05:54:43.343006Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-31T05:54:43.767595Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T05:55:55.514371Z", "memory_request": 4294967296, "memory_usage": 62075699, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T05:54:42.906958Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@osd.1", "version": "20.2.0"}, {"container_id": "050c75d570ab", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.49%", "created": "2026-01-31T05:54:50.219247Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-31T05:54:50.327861Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T05:55:55.514555Z", "memory_request": 4294967296, "memory_usage": 62023270, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T05:54:50.077039Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@osd.2", "version": "20.2.0"}, {"container_id": "3c764da63a2a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "4.62%", "created": "2026-01-31T05:55:52.374017Z", "daemon_id": "rgw.compute-0.hdercq", "daemon_name": "rgw.rgw.compute-0.hdercq", "daemon_type": "rgw", "events": ["2026-01-31T05:55:52.438039Z daemon:rgw.rgw.compute-0.hdercq [INFO] \"Deployed rgw.rgw.compute-0.hdercq on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2026-01-31T05:55:55.514726Z", "memory_usage": 53466890, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-01-31T05:55:52.278938Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7@rgw.rgw.compute-0.hdercq", "version": "20.2.0"}]
Jan 31 05:55:56 compute-0 systemd[1]: libpod-ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874.scope: Deactivated successfully.
Jan 31 05:55:56 compute-0 podman[96695]: 2026-01-31 05:55:56.459661505 +0000 UTC m=+0.566328226 container died ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874 (image=quay.io/ceph/ceph:v20, name=pedantic_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:56 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 31 05:55:56 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 31 05:55:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 42 pg[9.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c754a266025d34defae999c61376187766393813690c7eecd4aee612838970ab-merged.mount: Deactivated successfully.
Jan 31 05:55:56 compute-0 rsyslogd[1004]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "20fded41c264", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 05:55:56 compute-0 infallible_black[96809]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:55:56 compute-0 infallible_black[96809]: --> All data devices are unavailable
Jan 31 05:55:56 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 31 05:55:56 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 31 05:55:56 compute-0 systemd[1]: libpod-7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278.scope: Deactivated successfully.
Jan 31 05:55:56 compute-0 podman[96695]: 2026-01-31 05:55:56.849328733 +0000 UTC m=+0.955995504 container remove ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874 (image=quay.io/ceph/ceph:v20, name=pedantic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 05:55:56 compute-0 podman[96792]: 2026-01-31 05:55:56.86541898 +0000 UTC m=+0.695650159 container died 7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 05:55:56 compute-0 systemd[1]: libpod-conmon-ecf23b5bc05d00373250a71e6c686fd9e8d04bd660cda8579846e21623518874.scope: Deactivated successfully.
Jan 31 05:55:56 compute-0 sudo[96690]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-905532ae9f08d9f0d7bfb24e65a587a122bc13fb8cc153749a4f096be77803f8-merged.mount: Deactivated successfully.
Jan 31 05:55:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v99: 179 pgs: 32 peering, 33 unknown, 114 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Jan 31 05:55:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:55:57 compute-0 ceph-mon[75251]: mds.? [v2:192.168.122.100:6814/1545676168,v1:192.168.122.100:6815/1545676168] up:active
Jan 31 05:55:57 compute-0 ceph-mon[75251]: fsmap cephfs:1 {0=cephfs.compute-0.olydew=up:active}
Jan 31 05:55:57 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 05:55:57 compute-0 ceph-mon[75251]: osdmap e42: 3 total, 3 up, 3 in
Jan 31 05:55:57 compute-0 ceph-mon[75251]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 05:55:57 compute-0 ceph-mon[75251]: 4.1d scrub starts
Jan 31 05:55:57 compute-0 ceph-mon[75251]: 4.1d scrub ok
Jan 31 05:55:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 31 05:55:57 compute-0 podman[96845]: 2026-01-31 05:55:57.3296899 +0000 UTC m=+0.499969943 container remove 7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:57 compute-0 systemd[1]: libpod-conmon-7366d3d2b64a76673652341d84b9915afebac4736faadbabddb1c08892dc4278.scope: Deactivated successfully.
Jan 31 05:55:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 05:55:57 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 31 05:55:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 31 05:55:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 05:55:57 compute-0 sudo[96642]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:57 compute-0 sudo[96859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:57 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 31 05:55:57 compute-0 sudo[96859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:57 compute-0 sudo[96859]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:57 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 31 05:55:57 compute-0 sudo[96884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:55:57 compute-0 sudo[96884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:57 compute-0 sudo[96932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoukyluiyqfefnzzjsxjyscptjulsjbl ; /usr/bin/python3'
Jan 31 05:55:57 compute-0 sudo[96932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:57 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 43 pg[10.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [2] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:57 compute-0 podman[96947]: 2026-01-31 05:55:57.840503583 +0000 UTC m=+0.102611522 container create df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:57 compute-0 podman[96947]: 2026-01-31 05:55:57.76662969 +0000 UTC m=+0.028737659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:57 compute-0 python3[96936]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:57 compute-0 systemd[1]: Started libpod-conmon-df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b.scope.
Jan 31 05:55:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:58 compute-0 podman[96961]: 2026-01-31 05:55:57.913709417 +0000 UTC m=+0.033889293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:55:58 compute-0 podman[96947]: 2026-01-31 05:55:58.048448891 +0000 UTC m=+0.310556910 container init df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sanderson, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:58 compute-0 podman[96947]: 2026-01-31 05:55:58.057729279 +0000 UTC m=+0.319837238 container start df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 05:55:58 compute-0 happy_sanderson[96972]: 167 167
Jan 31 05:55:58 compute-0 systemd[1]: libpod-df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b.scope: Deactivated successfully.
Jan 31 05:55:58 compute-0 podman[96947]: 2026-01-31 05:55:58.104015185 +0000 UTC m=+0.366123174 container attach df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sanderson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:55:58 compute-0 podman[96947]: 2026-01-31 05:55:58.105516327 +0000 UTC m=+0.367624296 container died df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sanderson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-15b88a39e9288ff3dc5f56ad62a90ef2439a2968d5782ce44ca27ac466c5e09c-merged.mount: Deactivated successfully.
Jan 31 05:55:58 compute-0 podman[96947]: 2026-01-31 05:55:58.183989647 +0000 UTC m=+0.446097586 container remove df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 31 05:55:58 compute-0 systemd[1]: libpod-conmon-df17656e5c20db4c3673aa3fc687bb20c42cf40053768179497667fca07f346b.scope: Deactivated successfully.
Jan 31 05:55:58 compute-0 ceph-mon[75251]: 3.16 scrub starts
Jan 31 05:55:58 compute-0 ceph-mon[75251]: 3.16 scrub ok
Jan 31 05:55:58 compute-0 ceph-mon[75251]: pgmap v99: 179 pgs: 32 peering, 33 unknown, 114 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Jan 31 05:55:58 compute-0 ceph-mon[75251]: osdmap e43: 3 total, 3 up, 3 in
Jan 31 05:55:58 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 05:55:58 compute-0 ceph-mon[75251]: 4.1f scrub starts
Jan 31 05:55:58 compute-0 ceph-mon[75251]: 4.1f scrub ok
Jan 31 05:55:58 compute-0 podman[96961]: 2026-01-31 05:55:58.270345297 +0000 UTC m=+0.390525083 container create 644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a (image=quay.io/ceph/ceph:v20, name=hardcore_blackburn, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:55:58 compute-0 systemd[1]: Started libpod-conmon-644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a.scope.
Jan 31 05:55:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca71346e8589b2a0496812d4425546598e428594560feb6b1a54c7da4342d7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca71346e8589b2a0496812d4425546598e428594560feb6b1a54c7da4342d7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:58 compute-0 podman[96999]: 2026-01-31 05:55:58.309843304 +0000 UTC m=+0.054855595 container create 64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_black, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:58 compute-0 podman[96961]: 2026-01-31 05:55:58.33128992 +0000 UTC m=+0.451469726 container init 644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a (image=quay.io/ceph/ceph:v20, name=hardcore_blackburn, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:58 compute-0 podman[96961]: 2026-01-31 05:55:58.336197886 +0000 UTC m=+0.456377672 container start 644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a (image=quay.io/ceph/ceph:v20, name=hardcore_blackburn, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 05:55:58 compute-0 podman[96961]: 2026-01-31 05:55:58.339806877 +0000 UTC m=+0.459986683 container attach 644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a (image=quay.io/ceph/ceph:v20, name=hardcore_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 31 05:55:58 compute-0 systemd[1]: Started libpod-conmon-64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6.scope.
Jan 31 05:55:58 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 05:55:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 05:55:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d681e521f97213f2132464ef374649802ad465c59f42b9c04561e2c515807825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d681e521f97213f2132464ef374649802ad465c59f42b9c04561e2c515807825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d681e521f97213f2132464ef374649802ad465c59f42b9c04561e2c515807825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d681e521f97213f2132464ef374649802ad465c59f42b9c04561e2c515807825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:58 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 31 05:55:58 compute-0 podman[96999]: 2026-01-31 05:55:58.374266834 +0000 UTC m=+0.119279185 container init 64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:58 compute-0 podman[96999]: 2026-01-31 05:55:58.280448327 +0000 UTC m=+0.025460658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:58 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 44 pg[10.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [2] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:55:58 compute-0 podman[96999]: 2026-01-31 05:55:58.378891243 +0000 UTC m=+0.123903534 container start 64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 05:55:58 compute-0 podman[96999]: 2026-01-31 05:55:58.387607005 +0000 UTC m=+0.132619296 container attach 64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_black, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:58 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 31 05:55:58 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 31 05:55:58 compute-0 zealous_black[97023]: {
Jan 31 05:55:58 compute-0 zealous_black[97023]:     "0": [
Jan 31 05:55:58 compute-0 zealous_black[97023]:         {
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "devices": [
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "/dev/loop3"
Jan 31 05:55:58 compute-0 zealous_black[97023]:             ],
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_name": "ceph_lv0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_size": "21470642176",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "name": "ceph_lv0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "tags": {
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.crush_device_class": "",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.encrypted": "0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osd_id": "0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.type": "block",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.vdo": "0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.with_tpm": "0"
Jan 31 05:55:58 compute-0 zealous_black[97023]:             },
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "type": "block",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "vg_name": "ceph_vg0"
Jan 31 05:55:58 compute-0 zealous_black[97023]:         }
Jan 31 05:55:58 compute-0 zealous_black[97023]:     ],
Jan 31 05:55:58 compute-0 zealous_black[97023]:     "1": [
Jan 31 05:55:58 compute-0 zealous_black[97023]:         {
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "devices": [
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "/dev/loop4"
Jan 31 05:55:58 compute-0 zealous_black[97023]:             ],
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_name": "ceph_lv1",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_size": "21470642176",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "name": "ceph_lv1",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "tags": {
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.crush_device_class": "",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.encrypted": "0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osd_id": "1",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.type": "block",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.vdo": "0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.with_tpm": "0"
Jan 31 05:55:58 compute-0 zealous_black[97023]:             },
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "type": "block",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "vg_name": "ceph_vg1"
Jan 31 05:55:58 compute-0 zealous_black[97023]:         }
Jan 31 05:55:58 compute-0 zealous_black[97023]:     ],
Jan 31 05:55:58 compute-0 zealous_black[97023]:     "2": [
Jan 31 05:55:58 compute-0 zealous_black[97023]:         {
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "devices": [
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "/dev/loop5"
Jan 31 05:55:58 compute-0 zealous_black[97023]:             ],
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_name": "ceph_lv2",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_size": "21470642176",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "name": "ceph_lv2",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "tags": {
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.cluster_name": "ceph",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.crush_device_class": "",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.encrypted": "0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.objectstore": "bluestore",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osd_id": "2",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.type": "block",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.vdo": "0",
Jan 31 05:55:58 compute-0 zealous_black[97023]:                 "ceph.with_tpm": "0"
Jan 31 05:55:58 compute-0 zealous_black[97023]:             },
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "type": "block",
Jan 31 05:55:58 compute-0 zealous_black[97023]:             "vg_name": "ceph_vg2"
Jan 31 05:55:58 compute-0 zealous_black[97023]:         }
Jan 31 05:55:58 compute-0 zealous_black[97023]:     ]
Jan 31 05:55:58 compute-0 zealous_black[97023]: }
Jan 31 05:55:58 compute-0 systemd[1]: libpod-64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6.scope: Deactivated successfully.
Jan 31 05:55:58 compute-0 conmon[97023]: conmon 64c7cb2889828845b30e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6.scope/container/memory.events
Jan 31 05:55:58 compute-0 podman[96999]: 2026-01-31 05:55:58.682927221 +0000 UTC m=+0.427939512 container died 64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_black, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 05:55:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 05:55:58 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580178359' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 05:55:58 compute-0 hardcore_blackburn[97015]: 
Jan 31 05:55:58 compute-0 hardcore_blackburn[97015]: {"fsid":"797ee2fc-ca49-5eee-87c0-542bb035a7d7","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":151,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1769838900,"num_in_osds":3,"osd_in_since":1769838871,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":114},{"state_name":"unknown","count":33},{"state_name":"peering","count":32}],"num_pgs":179,"num_pools":9,"num_objects":23,"data_bytes":461710,"bytes_used":84234240,"bytes_avail":64327692288,"bytes_total":64411926528,"unknown_pgs_ratio":0.18435753881931305,"inactive_pgs_ratio":0.17877094447612762,"write_bytes_sec":3583,"read_op_per_sec":0,"write_op_per_sec":10},"fsmap":{"epoch":5,"btime":"2026-01-31T05:55:56:060604+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.olydew","status":"up:active","gid":14255}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-31T05:55:53.103264+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"c5e63847-718b-4828-91ba-fef2f2decb60":{"message":"Global Recovery Event (5s)\n      [=====.......................] (remaining: 19s)","progress":0.20338982343673706,"add_to_ceph_s":true}}}
Jan 31 05:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d681e521f97213f2132464ef374649802ad465c59f42b9c04561e2c515807825-merged.mount: Deactivated successfully.
Jan 31 05:55:58 compute-0 systemd[1]: libpod-644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a.scope: Deactivated successfully.
Jan 31 05:55:58 compute-0 conmon[97015]: conmon 644ca0c53f7a39f638e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a.scope/container/memory.events
Jan 31 05:55:58 compute-0 podman[96999]: 2026-01-31 05:55:58.838195275 +0000 UTC m=+0.583207606 container remove 64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_black, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:55:58 compute-0 systemd[1]: libpod-conmon-64c7cb2889828845b30ed1ef6bb2dcb735fcc6d42a6984a72426898fd61ca5f6.scope: Deactivated successfully.
Jan 31 05:55:58 compute-0 podman[96961]: 2026-01-31 05:55:58.845763035 +0000 UTC m=+0.965942821 container died 644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a (image=quay.io/ceph/ceph:v20, name=hardcore_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca71346e8589b2a0496812d4425546598e428594560feb6b1a54c7da4342d7f-merged.mount: Deactivated successfully.
Jan 31 05:55:58 compute-0 podman[96961]: 2026-01-31 05:55:58.899884879 +0000 UTC m=+1.020064685 container remove 644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a (image=quay.io/ceph/ceph:v20, name=hardcore_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:55:58 compute-0 sudo[96884]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:58 compute-0 systemd[1]: libpod-conmon-644ca0c53f7a39f638e29d381ca708ea480c898cbc100ee1c2e52f7ddf16f78a.scope: Deactivated successfully.
Jan 31 05:55:58 compute-0 sudo[96932]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:58 compute-0 sudo[97080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:55:58 compute-0 sudo[97080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:58 compute-0 sudo[97080]: pam_unix(sudo:session): session closed for user root
Jan 31 05:55:59 compute-0 sudo[97105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:55:59 compute-0 sudo[97105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:55:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v102: 180 pgs: 1 unknown, 179 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 31 05:55:59 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 05:55:59 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 05:55:59 compute-0 podman[97142]: 2026-01-31 05:55:59.306157558 +0000 UTC m=+0.048584991 container create 3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:59 compute-0 systemd[1]: Started libpod-conmon-3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c.scope.
Jan 31 05:55:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 31 05:55:59 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 05:55:59 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 05:55:59 compute-0 ceph-mon[75251]: osdmap e44: 3 total, 3 up, 3 in
Jan 31 05:55:59 compute-0 ceph-mon[75251]: 4.1e scrub starts
Jan 31 05:55:59 compute-0 ceph-mon[75251]: 4.1e scrub ok
Jan 31 05:55:59 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2580178359' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 05:55:59 compute-0 podman[97142]: 2026-01-31 05:55:59.371772851 +0000 UTC m=+0.114200334 container init 3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:55:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 05:55:59 compute-0 podman[97142]: 2026-01-31 05:55:59.281598775 +0000 UTC m=+0.024026258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:59 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 31 05:55:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 31 05:55:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 05:55:59 compute-0 podman[97142]: 2026-01-31 05:55:59.381701837 +0000 UTC m=+0.124129260 container start 3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:55:59 compute-0 stupefied_ishizaka[97159]: 167 167
Jan 31 05:55:59 compute-0 systemd[1]: libpod-3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c.scope: Deactivated successfully.
Jan 31 05:55:59 compute-0 podman[97142]: 2026-01-31 05:55:59.385601525 +0000 UTC m=+0.128028948 container attach 3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 05:55:59 compute-0 podman[97142]: 2026-01-31 05:55:59.386313865 +0000 UTC m=+0.128741288 container died 3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_ishizaka, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecbb659ccad820ceac4563936669d4cf637ce4e4f5f73202b2d7a72f0287a5ac-merged.mount: Deactivated successfully.
Jan 31 05:55:59 compute-0 podman[97142]: 2026-01-31 05:55:59.428886518 +0000 UTC m=+0.171313901 container remove 3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:55:59 compute-0 systemd[1]: libpod-conmon-3273d57f82b1af8016c8bcf3be22125fda4f29e60115f284b9035dc7c1cfd00c.scope: Deactivated successfully.
Jan 31 05:55:59 compute-0 podman[97184]: 2026-01-31 05:55:59.578005751 +0000 UTC m=+0.035266441 container create 5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shannon, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:55:59 compute-0 systemd[1]: Started libpod-conmon-5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97.scope.
Jan 31 05:55:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f78214ddb1515719470106a906de14f6e0be3baf274c31299765546ec91261/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f78214ddb1515719470106a906de14f6e0be3baf274c31299765546ec91261/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f78214ddb1515719470106a906de14f6e0be3baf274c31299765546ec91261/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f78214ddb1515719470106a906de14f6e0be3baf274c31299765546ec91261/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:55:59 compute-0 podman[97184]: 2026-01-31 05:55:59.560272488 +0000 UTC m=+0.017533188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:55:59 compute-0 podman[97184]: 2026-01-31 05:55:59.668066784 +0000 UTC m=+0.125327504 container init 5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:55:59 compute-0 podman[97184]: 2026-01-31 05:55:59.684292425 +0000 UTC m=+0.141553105 container start 5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 05:55:59 compute-0 podman[97184]: 2026-01-31 05:55:59.690993981 +0000 UTC m=+0.148254701 container attach 5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:55:59 compute-0 sudo[97228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yekjuicjdgwabpnhzyxythvhwxuwyknd ; /usr/bin/python3'
Jan 31 05:55:59 compute-0 sudo[97228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:55:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 45 pg[11.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:55:59 compute-0 python3[97230]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:55:59 compute-0 podman[97232]: 2026-01-31 05:55:59.952604229 +0000 UTC m=+0.057702325 container create 0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde (image=quay.io/ceph/ceph:v20, name=suspicious_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:55:59 compute-0 systemd[1]: Started libpod-conmon-0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde.scope.
Jan 31 05:56:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87358ef37c9cf542e39c44f0c86af956c5fd5414658a6a8c7db9527d4638e16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e87358ef37c9cf542e39c44f0c86af956c5fd5414658a6a8c7db9527d4638e16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:00 compute-0 podman[97232]: 2026-01-31 05:55:59.928741476 +0000 UTC m=+0.033839682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:56:00 compute-0 podman[97232]: 2026-01-31 05:56:00.0249869 +0000 UTC m=+0.130085026 container init 0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde (image=quay.io/ceph/ceph:v20, name=suspicious_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:56:00 compute-0 podman[97232]: 2026-01-31 05:56:00.029097774 +0000 UTC m=+0.134195880 container start 0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde (image=quay.io/ceph/ceph:v20, name=suspicious_yalow, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:56:00 compute-0 podman[97232]: 2026-01-31 05:56:00.032001745 +0000 UTC m=+0.137099851 container attach 0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde (image=quay.io/ceph/ceph:v20, name=suspicious_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 05:56:00 compute-0 ceph-mds[95670]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 05:56:00 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mds-cephfs-compute-0-olydew[95666]: 2026-01-31T05:56:00.067+0000 7f7757309640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 05:56:00 compute-0 lvm[97340]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:56:00 compute-0 lvm[97340]: VG ceph_vg0 finished
Jan 31 05:56:00 compute-0 lvm[97343]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:56:00 compute-0 lvm[97343]: VG ceph_vg1 finished
Jan 31 05:56:00 compute-0 lvm[97345]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:56:00 compute-0 lvm[97345]: VG ceph_vg2 finished
Jan 31 05:56:00 compute-0 lvm[97346]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:56:00 compute-0 lvm[97346]: VG ceph_vg1 finished
Jan 31 05:56:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 31 05:56:00 compute-0 ceph-mon[75251]: pgmap v102: 180 pgs: 1 unknown, 179 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 31 05:56:00 compute-0 ceph-mon[75251]: 2.9 scrub starts
Jan 31 05:56:00 compute-0 ceph-mon[75251]: 2.9 scrub ok
Jan 31 05:56:00 compute-0 ceph-mon[75251]: osdmap e45: 3 total, 3 up, 3 in
Jan 31 05:56:00 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 05:56:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 05:56:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 05:56:00 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 31 05:56:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 31 05:56:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 05:56:00 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 46 pg[11.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:00 compute-0 vigilant_shannon[97200]: {}
Jan 31 05:56:00 compute-0 systemd[1]: libpod-5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97.scope: Deactivated successfully.
Jan 31 05:56:00 compute-0 podman[97184]: 2026-01-31 05:56:00.438551161 +0000 UTC m=+0.895811841 container died 5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shannon, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:56:00 compute-0 systemd[1]: libpod-5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97.scope: Consumed 1.050s CPU time.
Jan 31 05:56:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 05:56:00 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3195650604' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:56:00 compute-0 suspicious_yalow[97258]: 
Jan 31 05:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5f78214ddb1515719470106a906de14f6e0be3baf274c31299765546ec91261-merged.mount: Deactivated successfully.
Jan 31 05:56:00 compute-0 systemd[1]: libpod-0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde.scope: Deactivated successfully.
Jan 31 05:56:00 compute-0 suspicious_yalow[97258]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.hdercq","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 31 05:56:00 compute-0 podman[97184]: 2026-01-31 05:56:00.481487094 +0000 UTC m=+0.938747774 container remove 5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shannon, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 05:56:00 compute-0 podman[97232]: 2026-01-31 05:56:00.48530917 +0000 UTC m=+0.590407306 container died 0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde (image=quay.io/ceph/ceph:v20, name=suspicious_yalow, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:56:00 compute-0 systemd[1]: libpod-conmon-5423f716d86e9bea2f1ec4223f6c86246973ead7e9ac7827afa6227679ae0f97.scope: Deactivated successfully.
Jan 31 05:56:00 compute-0 sudo[97105]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e87358ef37c9cf542e39c44f0c86af956c5fd5414658a6a8c7db9527d4638e16-merged.mount: Deactivated successfully.
Jan 31 05:56:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:56:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:00 compute-0 podman[97232]: 2026-01-31 05:56:00.552041375 +0000 UTC m=+0.657139491 container remove 0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde (image=quay.io/ceph/ceph:v20, name=suspicious_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:56:00 compute-0 systemd[1]: libpod-conmon-0d7408988103705185812651edbd84aa5f5bcef17fb78921f942ed0bb14eefde.scope: Deactivated successfully.
Jan 31 05:56:00 compute-0 sudo[97228]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:00 compute-0 sudo[97376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:56:00 compute-0 sudo[97376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:00 compute-0 sudo[97376]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:56:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:56:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:00 compute-0 sudo[97401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:56:00 compute-0 sudo[97401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:00 compute-0 sudo[97401]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:00 compute-0 sudo[97426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 05:56:00 compute-0 sudo[97426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:00 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 31 05:56:00 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 31 05:56:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v105: 181 pgs: 2 unknown, 179 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Jan 31 05:56:01 compute-0 podman[97493]: 2026-01-31 05:56:01.184697319 +0000 UTC m=+0.069629981 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:56:01 compute-0 podman[97493]: 2026-01-31 05:56:01.292521145 +0000 UTC m=+0.177453767 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:56:01 compute-0 sudo[97542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xecumyjmxlqnfyithrpfrhlgivftqxch ; /usr/bin/python3'
Jan 31 05:56:01 compute-0 sudo[97542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:56:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 31 05:56:01 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 05:56:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 05:56:01 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 31 05:56:01 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 05:56:01 compute-0 ceph-mon[75251]: osdmap e46: 3 total, 3 up, 3 in
Jan 31 05:56:01 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 05:56:01 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3195650604' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 05:56:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:01 compute-0 ceph-mon[75251]: 3.15 scrub starts
Jan 31 05:56:01 compute-0 ceph-mon[75251]: 3.15 scrub ok
Jan 31 05:56:01 compute-0 python3[97553]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:56:01 compute-0 podman[97600]: 2026-01-31 05:56:01.568688808 +0000 UTC m=+0.054955538 container create 563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb (image=quay.io/ceph/ceph:v20, name=busy_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:56:01 compute-0 systemd[1]: Started libpod-conmon-563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb.scope.
Jan 31 05:56:01 compute-0 podman[97600]: 2026-01-31 05:56:01.536236353 +0000 UTC m=+0.022503073 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:56:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95008a78292490d65ecb7889cf1df5c5fe2334bf9ae93ae4635b06a4240d5d32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95008a78292490d65ecb7889cf1df5c5fe2334bf9ae93ae4635b06a4240d5d32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:01 compute-0 podman[97600]: 2026-01-31 05:56:01.675047524 +0000 UTC m=+0.161314304 container init 563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb (image=quay.io/ceph/ceph:v20, name=busy_swartz, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:56:01 compute-0 podman[97600]: 2026-01-31 05:56:01.681446861 +0000 UTC m=+0.167713631 container start 563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb (image=quay.io/ceph/ceph:v20, name=busy_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:56:01 compute-0 podman[97600]: 2026-01-31 05:56:01.698161492 +0000 UTC m=+0.184428232 container attach 563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb (image=quay.io/ceph/ceph:v20, name=busy_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:56:01 compute-0 radosgw[95270]: v1 topic migration: starting v1 topic migration..
Jan 31 05:56:01 compute-0 radosgw[95270]: v1 topic migration: finished v1 topic migration
Jan 31 05:56:01 compute-0 radosgw[95270]: framework: beast
Jan 31 05:56:01 compute-0 radosgw[95270]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 05:56:01 compute-0 radosgw[95270]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 05:56:01 compute-0 radosgw[95270]: starting handler: beast
Jan 31 05:56:01 compute-0 radosgw[95270]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 05:56:01 compute-0 radosgw[95270]: mgrc service_daemon_register rgw.14258 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.hdercq,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864288,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=aac2167c-08e6-41c1-84df-31c57053db7e,zone_name=default,zonegroup_id=c8600002-707f-4ec2-a8e0-de9d564db04d,zonegroup_name=default}
Jan 31 05:56:02 compute-0 sudo[97426]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3540029783' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 31 05:56:02 compute-0 busy_swartz[97649]: mimic
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:56:02 compute-0 podman[97600]: 2026-01-31 05:56:02.112575752 +0000 UTC m=+0.598842442 container died 563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb (image=quay.io/ceph/ceph:v20, name=busy_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:56:02 compute-0 systemd[1]: libpod-563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb.scope: Deactivated successfully.
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-95008a78292490d65ecb7889cf1df5c5fe2334bf9ae93ae4635b06a4240d5d32-merged.mount: Deactivated successfully.
Jan 31 05:56:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:02 compute-0 podman[97600]: 2026-01-31 05:56:02.163001874 +0000 UTC m=+0.649268564 container remove 563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb (image=quay.io/ceph/ceph:v20, name=busy_swartz, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:56:02 compute-0 systemd[1]: libpod-conmon-563e6483e83495ebbfbe9adc932902b8fa022a17591a57703c081c3031bf08cb.scope: Deactivated successfully.
Jan 31 05:56:02 compute-0 sudo[97789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:56:02 compute-0 sudo[97789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:02 compute-0 sudo[97789]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:02 compute-0 sudo[97542]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:02 compute-0 sudo[97820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:56:02 compute-0 sudo[97820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:02 compute-0 ceph-mon[75251]: pgmap v105: 181 pgs: 2 unknown, 179 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1370847722' entity='client.rgw.rgw.compute-0.hdercq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 05:56:02 compute-0 ceph-mon[75251]: osdmap e47: 3 total, 3 up, 3 in
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3540029783' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:56:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:56:02 compute-0 podman[97857]: 2026-01-31 05:56:02.483272634 +0000 UTC m=+0.045545848 container create cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_carver, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:56:02 compute-0 systemd[1]: Started libpod-conmon-cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3.scope.
Jan 31 05:56:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:02 compute-0 podman[97857]: 2026-01-31 05:56:02.459834137 +0000 UTC m=+0.022107381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:56:02 compute-0 podman[97857]: 2026-01-31 05:56:02.562076999 +0000 UTC m=+0.124350193 container init cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:56:02 compute-0 podman[97857]: 2026-01-31 05:56:02.566617435 +0000 UTC m=+0.128890659 container start cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:56:02 compute-0 beautiful_carver[97873]: 167 167
Jan 31 05:56:02 compute-0 systemd[1]: libpod-cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3.scope: Deactivated successfully.
Jan 31 05:56:02 compute-0 podman[97857]: 2026-01-31 05:56:02.575779988 +0000 UTC m=+0.138053212 container attach cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_carver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:56:02 compute-0 podman[97857]: 2026-01-31 05:56:02.576272781 +0000 UTC m=+0.138545975 container died cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_carver, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:56:02 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 31 05:56:02 compute-0 sudo[97914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfsraacjtyhdleavpimbugaeujxoyuvo ; /usr/bin/python3'
Jan 31 05:56:02 compute-0 sudo[97914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:56:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v107: 181 pgs: 1 unknown, 180 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 31 05:56:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d775056311aa305c0aefa1aa683b928fbbf42d6989d204a3511dbff4d07aa02-merged.mount: Deactivated successfully.
Jan 31 05:56:03 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 31 05:56:03 compute-0 ceph-mon[75251]: 3.19 scrub starts
Jan 31 05:56:03 compute-0 podman[97857]: 2026-01-31 05:56:03.451289164 +0000 UTC m=+1.013562358 container remove cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:56:03 compute-0 systemd[1]: libpod-conmon-cd0b3e8ba69a86811acc3ab10f601a1b4ab4988042f459ce25d8dadbd078d0d3.scope: Deactivated successfully.
Jan 31 05:56:03 compute-0 python3[97916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:56:03 compute-0 podman[97924]: 2026-01-31 05:56:03.562088212 +0000 UTC m=+0.048665324 container create f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:56:03 compute-0 systemd[1]: Started libpod-conmon-f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb.scope.
Jan 31 05:56:03 compute-0 podman[97924]: 2026-01-31 05:56:03.532949218 +0000 UTC m=+0.019526350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:56:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb06bd85de4ef40132466a744ce604e324273e5a7c7b8092255f654c2673d67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb06bd85de4ef40132466a744ce604e324273e5a7c7b8092255f654c2673d67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb06bd85de4ef40132466a744ce604e324273e5a7c7b8092255f654c2673d67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb06bd85de4ef40132466a744ce604e324273e5a7c7b8092255f654c2673d67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb06bd85de4ef40132466a744ce604e324273e5a7c7b8092255f654c2673d67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:03 compute-0 podman[97938]: 2026-01-31 05:56:03.66309588 +0000 UTC m=+0.103055035 container create 2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e (image=quay.io/ceph/ceph:v20, name=compassionate_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:56:03 compute-0 podman[97924]: 2026-01-31 05:56:03.691741621 +0000 UTC m=+0.178318733 container init f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_perlman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:56:03 compute-0 podman[97924]: 2026-01-31 05:56:03.702485097 +0000 UTC m=+0.189062249 container start f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 05:56:03 compute-0 podman[97938]: 2026-01-31 05:56:03.614572591 +0000 UTC m=+0.054531826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:56:03 compute-0 podman[97924]: 2026-01-31 05:56:03.712778892 +0000 UTC m=+0.199356044 container attach f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:56:03 compute-0 systemd[1]: Started libpod-conmon-2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e.scope.
Jan 31 05:56:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2482c42cfad4d24cec4cb985004b7dfb8d7378d8664a40e1e1afb8a55722162/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2482c42cfad4d24cec4cb985004b7dfb8d7378d8664a40e1e1afb8a55722162/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:03 compute-0 podman[97938]: 2026-01-31 05:56:03.779205555 +0000 UTC m=+0.219164730 container init 2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e (image=quay.io/ceph/ceph:v20, name=compassionate_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:56:03 compute-0 podman[97938]: 2026-01-31 05:56:03.783208636 +0000 UTC m=+0.223167821 container start 2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e (image=quay.io/ceph/ceph:v20, name=compassionate_babbage, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:56:03 compute-0 podman[97938]: 2026-01-31 05:56:03.805251544 +0000 UTC m=+0.245210689 container attach 2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e (image=quay.io/ceph/ceph:v20, name=compassionate_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:56:04 compute-0 elastic_perlman[97953]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:56:04 compute-0 elastic_perlman[97953]: --> All data devices are unavailable
Jan 31 05:56:04 compute-0 podman[97924]: 2026-01-31 05:56:04.162161326 +0000 UTC m=+0.648738458 container died f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:56:04 compute-0 systemd[1]: libpod-f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb.scope: Deactivated successfully.
Jan 31 05:56:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 31 05:56:04 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432809534' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 31 05:56:04 compute-0 compassionate_babbage[97960]: 
Jan 31 05:56:04 compute-0 systemd[1]: libpod-2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e.scope: Deactivated successfully.
Jan 31 05:56:04 compute-0 compassionate_babbage[97960]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 31 05:56:04 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 31 05:56:04 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 31 05:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb06bd85de4ef40132466a744ce604e324273e5a7c7b8092255f654c2673d67-merged.mount: Deactivated successfully.
Jan 31 05:56:04 compute-0 podman[97924]: 2026-01-31 05:56:04.457015535 +0000 UTC m=+0.943592687 container remove f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 05:56:04 compute-0 systemd[1]: libpod-conmon-f32f723b4a10e4b47c3eca98ba4a38e95c9a928eb3646fa5cc3ad0a2529436fb.scope: Deactivated successfully.
Jan 31 05:56:04 compute-0 podman[97938]: 2026-01-31 05:56:04.489481901 +0000 UTC m=+0.929441056 container died 2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e (image=quay.io/ceph/ceph:v20, name=compassionate_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:56:04 compute-0 ceph-mon[75251]: pgmap v107: 181 pgs: 1 unknown, 180 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 31 05:56:04 compute-0 ceph-mon[75251]: 3.19 scrub ok
Jan 31 05:56:04 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1432809534' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 31 05:56:04 compute-0 sudo[97820]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:04 compute-0 sudo[98026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:56:04 compute-0 sudo[98026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:04 compute-0 sudo[98026]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:04 compute-0 sudo[98051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:56:04 compute-0 sudo[98051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2482c42cfad4d24cec4cb985004b7dfb8d7378d8664a40e1e1afb8a55722162-merged.mount: Deactivated successfully.
Jan 31 05:56:04 compute-0 podman[98013]: 2026-01-31 05:56:04.757243502 +0000 UTC m=+0.484605017 container remove 2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e (image=quay.io/ceph/ceph:v20, name=compassionate_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:56:04 compute-0 systemd[1]: libpod-conmon-2e62f25d519c873697b023ea4ff76b36faf04aedf930845517cf8e222e956c7e.scope: Deactivated successfully.
Jan 31 05:56:04 compute-0 sudo[97914]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:04 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 31 05:56:04 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 31 05:56:04 compute-0 podman[98089]: 2026-01-31 05:56:04.972465303 +0000 UTC m=+0.069444258 container create 50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:56:05 compute-0 systemd[1]: Started libpod-conmon-50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac.scope.
Jan 31 05:56:05 compute-0 podman[98089]: 2026-01-31 05:56:04.939066051 +0000 UTC m=+0.036045066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:56:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:05 compute-0 podman[98089]: 2026-01-31 05:56:05.057932302 +0000 UTC m=+0.154911257 container init 50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_aryabhata, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:56:05 compute-0 podman[98089]: 2026-01-31 05:56:05.069702387 +0000 UTC m=+0.166681312 container start 50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:56:05 compute-0 podman[98089]: 2026-01-31 05:56:05.074066898 +0000 UTC m=+0.171045823 container attach 50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 05:56:05 compute-0 gracious_aryabhata[98105]: 167 167
Jan 31 05:56:05 compute-0 systemd[1]: libpod-50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac.scope: Deactivated successfully.
Jan 31 05:56:05 compute-0 podman[98089]: 2026-01-31 05:56:05.07885477 +0000 UTC m=+0.175833695 container died 50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:56:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v108: 181 pgs: 1 active+clean+scrubbing, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 11 KiB/s wr, 399 op/s
Jan 31 05:56:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-da6775a694d5cd844cd88195e3d3c5d2d589549b970cb4c61db7e8cf9bfde79b-merged.mount: Deactivated successfully.
Jan 31 05:56:05 compute-0 podman[98089]: 2026-01-31 05:56:05.183696554 +0000 UTC m=+0.280675489 container remove 50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_aryabhata, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:56:05 compute-0 systemd[1]: libpod-conmon-50470439ad318b5a461b130ee5370e61045a2a6488ca3943ef479b69486329ac.scope: Deactivated successfully.
Jan 31 05:56:05 compute-0 podman[98132]: 2026-01-31 05:56:05.317448776 +0000 UTC m=+0.044513910 container create 60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 05:56:05 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event c5e63847-718b-4828-91ba-fef2f2decb60 (Global Recovery Event) in 15 seconds
Jan 31 05:56:05 compute-0 systemd[1]: Started libpod-conmon-60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d.scope.
Jan 31 05:56:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:05 compute-0 podman[98132]: 2026-01-31 05:56:05.296546699 +0000 UTC m=+0.023611853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb65a3bc6eb9b1f1b4834da41811f170cd6600d53f1df50f46c572793f7a4228/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb65a3bc6eb9b1f1b4834da41811f170cd6600d53f1df50f46c572793f7a4228/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb65a3bc6eb9b1f1b4834da41811f170cd6600d53f1df50f46c572793f7a4228/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb65a3bc6eb9b1f1b4834da41811f170cd6600d53f1df50f46c572793f7a4228/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:05 compute-0 podman[98132]: 2026-01-31 05:56:05.439535136 +0000 UTC m=+0.166600310 container init 60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:56:05 compute-0 podman[98132]: 2026-01-31 05:56:05.44799844 +0000 UTC m=+0.175063574 container start 60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_thompson, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 31 05:56:05 compute-0 podman[98132]: 2026-01-31 05:56:05.453756188 +0000 UTC m=+0.180821312 container attach 60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_thompson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:56:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 05:56:05 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.800793648s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433731079s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.800745010s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433731079s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.1b( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.122439384s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755470276s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.18( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.122337341s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755386353s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.1b( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.122387886s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755470276s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.18( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.122304916s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755386353s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799903870s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433059692s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.17( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.122113228s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755355835s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.17( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.122090340s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755355835s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.16( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.121955872s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755332947s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799689293s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433059692s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.15( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.122048378s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755455017s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.15( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.122031212s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755455017s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.16( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.121916771s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755332947s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799319267s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.432853699s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799300194s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.432853699s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799451828s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433128357s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799437523s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433128357s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799259186s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433059692s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799293518s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433135986s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799285889s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433143616s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799268723s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433143616s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799189568s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433059692s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799265862s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433135986s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.11( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.121336937s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755378723s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.11( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.121321678s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755378723s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799057961s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433181763s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799020767s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433181763s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799299240s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433547974s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799285889s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433547974s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.f( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.121062279s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755340576s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.13( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.121564865s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755325317s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.f( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.121034622s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755340576s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799324036s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433830261s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.799294472s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433830261s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.13( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.120951653s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755325317s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.d( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.120716095s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755325317s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.2( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.120363235s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755088806s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.2( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.120347977s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755088806s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.7( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.120271683s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755134583s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.3( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.119976997s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.754920959s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.7( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.120210648s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755134583s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798876762s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433807373s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.3( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.119960785s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.754920959s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.d( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.120504379s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755325317s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798834801s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433807373s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798753738s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433845520s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798738480s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433845520s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798618317s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.434074402s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798385620s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433837891s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.4( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.119726181s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755187988s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798600197s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.434074402s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798350334s) [0] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433837891s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.4( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.119693756s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755187988s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.5( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.119325638s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.754882812s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.6( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.119271278s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.754920959s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.5( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.119283676s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.754882812s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.6( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.119254112s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.754920959s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798067093s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.433845520s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.798036575s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.433845520s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.803041458s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.438880920s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.8( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118915558s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.754753113s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.803025246s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.438880920s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.8( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118883133s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.754753113s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.9( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118696213s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.754653931s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.b( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118680000s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.754737854s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.9( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118632317s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.754653931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.b( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118662834s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.754737854s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.802736282s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.438873291s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.1c( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=8.944723129s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.580970764s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.1c( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=8.944623947s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.580970764s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.802440643s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.438934326s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.802540779s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.438873291s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.802374840s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.438934326s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.19( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118818283s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755378723s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.1d( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=8.944358826s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.580932617s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.a( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118189812s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.754638672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.1d( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=8.944301605s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.580932617s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.802179337s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.438888550s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.a( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.117986679s) [1] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.754638672s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.802164078s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.438888550s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.1f( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118324280s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 active pruub 83.755088806s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.1f( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118295670s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755088806s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.801934242s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 85.438919067s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=37/38 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=10.801916122s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 85.438919067s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[2.19( empty local-lis/les=34/36 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48 pruub=9.118177414s) [0] r=-1 lpr=48 pi=[34,48)/1 crt=0'0 unknown NOTIFY pruub 83.755378723s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.806041718s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.316459656s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.805079460s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.316459656s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.17( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.803436279s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.314849854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.17( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.803381920s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.314849854s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.16( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.803214073s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.314888000s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.16( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.803139687s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.314888000s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.15( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.803123474s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.314949036s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.15( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.803097725s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.314949036s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.18( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.802927017s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.314842224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.805058479s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.316970825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.804978371s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.316970825s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.18( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.802860260s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.314842224s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.12( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.802370071s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315025330s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.12( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.802336693s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315025330s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.f( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.802424431s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315193176s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.f( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.802395821s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315193176s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.e( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.802042007s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315048218s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.e( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.802007675s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315048218s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.803932190s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.317161560s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.804681778s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.317916870s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.804660797s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.317916870s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.803900719s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.317161560s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.803631783s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.317062378s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.803609848s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.317062378s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.11( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.801548004s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315025330s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.c( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.801477432s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315093994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.18( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.11( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.801283836s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315025330s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.c( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.801413536s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315093994s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.803669930s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.317291260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.802873611s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.317291260s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.1d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.16( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.e( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.1b( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.11( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.17( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-mon[75251]: 2.8 scrub starts
Jan 31 05:56:05 compute-0 ceph-mon[75251]: 2.8 scrub ok
Jan 31 05:56:05 compute-0 ceph-mon[75251]: 3.13 scrub starts
Jan 31 05:56:05 compute-0 ceph-mon[75251]: 3.13 scrub ok
Jan 31 05:56:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.15( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[5.1e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.19( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.18( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.f( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.797822952s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318626404s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.797794342s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318626404s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.c( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.1d( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.1c( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.792607307s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.316627502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.f( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.792487144s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.316627502s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.2( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.1f( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.793453217s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318031311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.792181015s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318031311s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.b( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.3( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.789014816s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315437317s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.3( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.788969994s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315437317s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.791526794s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318092346s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.8( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.16( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.17( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.15( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.13( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.791278839s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318092346s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.792234421s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.319160461s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.792210579s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.319160461s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.6( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.788415909s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315490723s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.791485786s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318618774s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.6( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.788383484s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315490723s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.791462898s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318618774s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.7( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.788161278s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315467834s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.7( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.788138390s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315467834s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.790742874s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318122864s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.790719032s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318122864s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.8( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.787964821s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315483093s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.8( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.787937164s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315483093s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.790954590s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318527222s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.790933609s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318527222s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.9( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.787736893s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315505981s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.9( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.787706375s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315505981s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.a( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.787701607s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315574646s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.a( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.787675858s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315574646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.790717125s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318763733s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.790686607s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318763733s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.790534973s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318801880s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.790503502s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318801880s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1b( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.787277222s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315605164s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1b( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.787245750s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315605164s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.789325714s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318809509s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.789297104s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318809509s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.5( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.788572311s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315391541s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.5( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.785785675s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315391541s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.784984589s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315361023s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1d( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.785181046s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315605164s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.784946442s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315361023s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1d( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.785148621s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315605164s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.787522316s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318046570s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.787494659s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318046570s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.788090706s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318885803s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1e( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.784790993s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315620422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.788064957s) [2] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318885803s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1e( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.784766197s) [2] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315620422s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.12( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[2.11( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.787840843s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 94.318847656s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.786651611s) [0] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 94.318847656s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1f( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.783401489s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 active pruub 93.315628052s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[3.1f( empty local-lis/les=36/39 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48 pruub=11.783365250s) [0] r=-1 lpr=48 pi=[36,48)/1 crt=0'0 unknown NOTIFY pruub 93.315628052s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.783786774s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851516724s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.7( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.783742905s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851516724s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.783323288s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851486206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.783279419s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851486206s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.8( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.782996178s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851455688s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.782952309s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851455688s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.782670021s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851455688s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.782629013s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851455688s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.782299042s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851394653s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.782211304s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851394653s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.781963348s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851387024s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.781930923s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851387024s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.781692505s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851371765s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.5( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.781666756s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851371765s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.3( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.7( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.780674934s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851196289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.780650139s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851196289s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.d( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.780573845s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851173401s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.d( v 40'3 (0'0,40'3] local-lis/les=39/40 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.780754089s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'3 lcod 40'2 active pruub 98.851379395s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.780545235s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851173401s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.f( v 40'5 (0'0,40'5] local-lis/les=39/40 n=3 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.780461311s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'4 lcod 40'4 active pruub 98.851142883s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.d( v 40'3 (0'0,40'3] local-lis/les=39/40 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.780697823s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'3 lcod 40'2 unknown NOTIFY pruub 98.851379395s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.f( v 40'5 (0'0,40'5] local-lis/les=39/40 n=3 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.780353546s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'4 lcod 40'4 unknown NOTIFY pruub 98.851142883s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.780125618s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851043701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.780143738s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.851081848s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.780108452s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851043701s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.780119896s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.851081848s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.3( v 40'2 (0'0,40'2] local-lis/les=39/40 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.780077934s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'2 lcod 40'1 active pruub 98.851066589s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.779912949s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 98.850997925s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.779896736s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 98.850997925s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.779659271s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850883484s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.779644966s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850883484s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.b( v 40'3 (0'0,40'3] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.779596329s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'1 lcod 40'2 active pruub 98.850868225s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.b( v 40'3 (0'0,40'3] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.779552460s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'1 lcod 40'2 unknown NOTIFY pruub 98.850868225s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.779419899s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850822449s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.779401779s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850822449s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.779218674s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850738525s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.779205322s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850738525s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.7( v 40'2 (0'0,40'2] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.778979301s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'2 lcod 40'1 active pruub 98.850646973s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.7( v 40'2 (0'0,40'2] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.778946877s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'2 lcod 40'1 unknown NOTIFY pruub 98.850646973s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778876305s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850616455s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778822899s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850624084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778820038s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850616455s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778803825s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850624084s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778730392s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850753784s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.5( v 40'3 (0'0,40'3] local-lis/les=39/40 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.778353691s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'3 lcod 40'2 active pruub 98.850418091s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778705597s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850753784s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.778394699s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 active pruub 98.850471497s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.778373718s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=0'0 unknown NOTIFY pruub 98.850471497s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.5( v 40'3 (0'0,40'3] local-lis/les=39/40 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.778304100s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'3 lcod 40'2 unknown NOTIFY pruub 98.850418091s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.777996063s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850227356s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.777973175s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850227356s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778056145s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850326538s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.777961731s) [2] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850326538s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778246880s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 active pruub 98.850898743s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=37/40 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48 pruub=12.778201103s) [1] r=-1 lpr=48 pi=[37,48)/1 crt=0'0 unknown NOTIFY pruub 98.850898743s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[6.3( v 40'2 (0'0,40'2] local-lis/les=39/40 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48 pruub=12.778072357s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=40'2 lcod 40'1 unknown NOTIFY pruub 98.851066589s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.3( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.6( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.9( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.a( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.4( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.5( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.1b( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.6( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.9( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.1( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[2.a( empty local-lis/les=0/0 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 48 pg[3.1f( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.1d( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 48 pg[3.1e( empty local-lis/les=0/0 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 48 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]: {
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:     "0": [
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:         {
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "devices": [
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "/dev/loop3"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             ],
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_name": "ceph_lv0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_size": "21470642176",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "name": "ceph_lv0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "tags": {
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cluster_name": "ceph",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.crush_device_class": "",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.encrypted": "0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.objectstore": "bluestore",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osd_id": "0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.type": "block",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.vdo": "0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.with_tpm": "0"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             },
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "type": "block",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "vg_name": "ceph_vg0"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:         }
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:     ],
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:     "1": [
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:         {
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "devices": [
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "/dev/loop4"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             ],
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_name": "ceph_lv1",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_size": "21470642176",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "name": "ceph_lv1",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "tags": {
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cluster_name": "ceph",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.crush_device_class": "",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.encrypted": "0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.objectstore": "bluestore",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osd_id": "1",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.type": "block",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.vdo": "0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.with_tpm": "0"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             },
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "type": "block",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "vg_name": "ceph_vg1"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:         }
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:     ],
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:     "2": [
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:         {
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "devices": [
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "/dev/loop5"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             ],
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_name": "ceph_lv2",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_size": "21470642176",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "name": "ceph_lv2",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "tags": {
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.cluster_name": "ceph",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.crush_device_class": "",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.encrypted": "0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.objectstore": "bluestore",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osd_id": "2",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.type": "block",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.vdo": "0",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:                 "ceph.with_tpm": "0"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             },
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "type": "block",
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:             "vg_name": "ceph_vg2"
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:         }
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]:     ]
Jan 31 05:56:05 compute-0 thirsty_thompson[98149]: }
Jan 31 05:56:05 compute-0 systemd[1]: libpod-60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d.scope: Deactivated successfully.
Jan 31 05:56:05 compute-0 podman[98132]: 2026-01-31 05:56:05.773292639 +0000 UTC m=+0.500357733 container died 60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_thompson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 05:56:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb65a3bc6eb9b1f1b4834da41811f170cd6600d53f1df50f46c572793f7a4228-merged.mount: Deactivated successfully.
Jan 31 05:56:05 compute-0 podman[98132]: 2026-01-31 05:56:05.829229183 +0000 UTC m=+0.556294267 container remove 60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_thompson, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:56:05 compute-0 systemd[1]: libpod-conmon-60883ce5f1b60e6ff7082429672bc80946aa0d9c72e1774f44edf2df0b64b43d.scope: Deactivated successfully.
Jan 31 05:56:05 compute-0 sudo[98051]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:05 compute-0 sudo[98171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:56:05 compute-0 sudo[98171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:05 compute-0 sudo[98171]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:05 compute-0 sudo[98196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:56:05 compute-0 sudo[98196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:06 compute-0 podman[98233]: 2026-01-31 05:56:06.216517033 +0000 UTC m=+0.053537749 container create b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:56:06 compute-0 systemd[1]: Started libpod-conmon-b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736.scope.
Jan 31 05:56:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:06 compute-0 podman[98233]: 2026-01-31 05:56:06.198204598 +0000 UTC m=+0.035225364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:56:06 compute-0 podman[98233]: 2026-01-31 05:56:06.295979147 +0000 UTC m=+0.132999863 container init b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:56:06 compute-0 podman[98233]: 2026-01-31 05:56:06.300234534 +0000 UTC m=+0.137255250 container start b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 05:56:06 compute-0 podman[98233]: 2026-01-31 05:56:06.302882667 +0000 UTC m=+0.139903383 container attach b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:56:06 compute-0 magical_mirzakhani[98249]: 167 167
Jan 31 05:56:06 compute-0 systemd[1]: libpod-b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736.scope: Deactivated successfully.
Jan 31 05:56:06 compute-0 podman[98233]: 2026-01-31 05:56:06.304132672 +0000 UTC m=+0.141153408 container died b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:56:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-44332748045d689b9fcfd94c873b49b8c18ed26233d61bb64cb969fa4c5613c3-merged.mount: Deactivated successfully.
Jan 31 05:56:06 compute-0 podman[98233]: 2026-01-31 05:56:06.342012157 +0000 UTC m=+0.179032873 container remove b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:56:06 compute-0 systemd[1]: libpod-conmon-b9fc197d395c69c9a3df9f82fe82147ef7c30d6574fbe1cdb369a63f9a298736.scope: Deactivated successfully.
Jan 31 05:56:06 compute-0 podman[98272]: 2026-01-31 05:56:06.488733837 +0000 UTC m=+0.051665487 container create 5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 05:56:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 31 05:56:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 05:56:06 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.1b( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.1e( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.17( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.12( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.13( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.10( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.12( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.15( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.11( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.9( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.16( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.8( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.9( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[6.b( v 40'3 lc 0'0 (0'0,40'3] local-lis/les=48/49 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=40'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[6.9( empty local-lis/les=48/49 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.d( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.14( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.a( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.5( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.3( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[6.5( v 40'3 lc 40'1 (0'0,40'3] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=40'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.5( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[6.1( empty local-lis/les=48/49 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[6.7( v 40'2 lc 40'1 (0'0,40'2] local-lis/les=48/49 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=40'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.4( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.7( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.7( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.2( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.1( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[6.3( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=40'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.6( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[6.d( v 40'3 lc 40'1 (0'0,40'3] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=40'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.4( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.f( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[4.d( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.f( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.c( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.18( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[2.9( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.1d( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-mon[75251]: pgmap v108: 181 pgs: 1 active+clean+scrubbing, 180 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 11 KiB/s wr, 399 op/s
Jan 31 05:56:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.19( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 05:56:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[6.f( v 40'5 lc 40'1 (0'0,40'5] local-lis/les=48/49 n=3 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=40'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 49 pg[5.1a( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [1] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-mon[75251]: osdmap e48: 3 total, 3 up, 3 in
Jan 31 05:56:06 compute-0 ceph-mon[75251]: osdmap e49: 3 total, 3 up, 3 in
Jan 31 05:56:06 compute-0 systemd[1]: Started libpod-conmon-5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6.scope.
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.e( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.1d( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.1a( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.8( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.c( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.5( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.7( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.5( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.2( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.8( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.a( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.e( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.11( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.15( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.1( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.16( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.11( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[3.18( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [2] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[7.1c( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.1c( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.11( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.13( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.a( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.1( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.e( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.1a( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.18( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 49 pg[4.1b( empty local-lis/les=48/49 n=0 ec=37/19 lis/c=37/37 les/c/f=40/40/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.f( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.4( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.1f( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.1b( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.1( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.9( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.6( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.c( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.3( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.18( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.6( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.3( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.f( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.a( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.17( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.13( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.15( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[7.1b( empty local-lis/les=48/49 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=48) [0] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.12( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.9( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.11( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.13( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[5.15( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[3.1f( empty local-lis/les=48/49 n=0 ec=36/17 lis/c=36/36 les/c/f=39/39/0 sis=48) [0] r=0 lpr=48 pi=[36,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.8( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.16( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.b( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.1f( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[5.14( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[5.3( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.f( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[5.5( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[5.2( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.1d( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.1c( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[5.4( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[5.7( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.19( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[5.1e( empty local-lis/les=48/49 n=0 ec=37/20 lis/c=37/37 les/c/f=38/38/0 sis=48) [0] r=0 lpr=48 pi=[37,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.2( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 49 pg[2.18( empty local-lis/les=48/49 n=0 ec=34/16 lis/c=34/34 les/c/f=36/36/0 sis=48) [0] r=0 lpr=48 pi=[34,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16ca923503024aef30d967abb8bd9b51a1d1c48f625df2cb39a719999c362c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:06 compute-0 podman[98272]: 2026-01-31 05:56:06.467769509 +0000 UTC m=+0.030701189 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16ca923503024aef30d967abb8bd9b51a1d1c48f625df2cb39a719999c362c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16ca923503024aef30d967abb8bd9b51a1d1c48f625df2cb39a719999c362c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16ca923503024aef30d967abb8bd9b51a1d1c48f625df2cb39a719999c362c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:06 compute-0 podman[98272]: 2026-01-31 05:56:06.591848504 +0000 UTC m=+0.154780184 container init 5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shtern, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:56:06 compute-0 podman[98272]: 2026-01-31 05:56:06.600341418 +0000 UTC m=+0.163273068 container start 5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shtern, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:56:06 compute-0 podman[98272]: 2026-01-31 05:56:06.607197967 +0000 UTC m=+0.170129617 container attach 5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shtern, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:56:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v111: 181 pgs: 41 peering, 1 active+clean+scrubbing, 139 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 11 KiB/s wr, 399 op/s
Jan 31 05:56:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:07 compute-0 lvm[98365]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:56:07 compute-0 lvm[98365]: VG ceph_vg0 finished
Jan 31 05:56:07 compute-0 lvm[98368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:56:07 compute-0 lvm[98368]: VG ceph_vg1 finished
Jan 31 05:56:07 compute-0 lvm[98370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:56:07 compute-0 lvm[98370]: VG ceph_vg2 finished
Jan 31 05:56:07 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 31 05:56:07 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 31 05:56:07 compute-0 admiring_shtern[98289]: {}
Jan 31 05:56:07 compute-0 systemd[1]: libpod-5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6.scope: Deactivated successfully.
Jan 31 05:56:07 compute-0 podman[98272]: 2026-01-31 05:56:07.361169848 +0000 UTC m=+0.924101578 container died 5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shtern, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:56:07 compute-0 systemd[1]: libpod-5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6.scope: Consumed 1.095s CPU time.
Jan 31 05:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f16ca923503024aef30d967abb8bd9b51a1d1c48f625df2cb39a719999c362c2-merged.mount: Deactivated successfully.
Jan 31 05:56:07 compute-0 podman[98272]: 2026-01-31 05:56:07.522698907 +0000 UTC m=+1.085630587 container remove 5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shtern, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 05:56:07 compute-0 systemd[1]: libpod-conmon-5b15d454de66d63fa9b272e0be49da62bc1e26aa12b80c8c0a92f5cf5b1e28c6.scope: Deactivated successfully.
Jan 31 05:56:07 compute-0 ceph-mon[75251]: 4.17 scrub starts
Jan 31 05:56:07 compute-0 ceph-mon[75251]: 4.17 scrub ok
Jan 31 05:56:07 compute-0 sudo[98196]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:56:07 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:56:07 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:07 compute-0 sudo[98387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:56:07 compute-0 sudo[98387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:56:07 compute-0 sudo[98387]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:08 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 31 05:56:08 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 31 05:56:08 compute-0 ceph-mon[75251]: pgmap v111: 181 pgs: 41 peering, 1 active+clean+scrubbing, 139 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 11 KiB/s wr, 399 op/s
Jan 31 05:56:08 compute-0 ceph-mon[75251]: 5.1c scrub starts
Jan 31 05:56:08 compute-0 ceph-mon[75251]: 5.1c scrub ok
Jan 31 05:56:08 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:08 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v112: 181 pgs: 41 peering, 140 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 8.6 KiB/s wr, 473 op/s; 104 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:09 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 31 05:56:09 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 31 05:56:09 compute-0 ceph-mon[75251]: 2.1a scrub starts
Jan 31 05:56:09 compute-0 ceph-mon[75251]: 2.1a scrub ok
Jan 31 05:56:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 31 05:56:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 31 05:56:10 compute-0 ceph-mgr[75550]: [progress INFO root] Writing back 12 completed events
Jan 31 05:56:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 05:56:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:10 compute-0 ceph-mon[75251]: pgmap v112: 181 pgs: 41 peering, 140 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 8.6 KiB/s wr, 473 op/s; 104 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:10 compute-0 ceph-mon[75251]: 4.16 scrub starts
Jan 31 05:56:10 compute-0 ceph-mon[75251]: 4.16 scrub ok
Jan 31 05:56:10 compute-0 ceph-mon[75251]: 7.1e scrub starts
Jan 31 05:56:10 compute-0 ceph-mon[75251]: 7.1e scrub ok
Jan 31 05:56:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:56:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v113: 181 pgs: 41 peering, 140 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 232 KiB/s rd, 8.2 KiB/s wr, 455 op/s; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:12 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 31 05:56:12 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 31 05:56:12 compute-0 ceph-mon[75251]: pgmap v113: 181 pgs: 41 peering, 140 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 232 KiB/s rd, 8.2 KiB/s wr, 455 op/s; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v114: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 0 B/s wr, 156 op/s; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 05:56:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 05:56:13 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 31 05:56:13 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 31 05:56:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 31 05:56:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 05:56:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 05:56:13 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 31 05:56:13 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 31 05:56:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 50 pg[6.e( v 40'3 (0'0,40'3] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50 pruub=12.447887421s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=40'2 lcod 40'2 active pruub 106.852066040s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 50 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50 pruub=12.447807312s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=0'0 active pruub 106.852066040s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 50 pg[6.e( v 40'3 (0'0,40'3] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50 pruub=12.447824478s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=40'2 lcod 40'2 unknown NOTIFY pruub 106.852066040s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 50 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50 pruub=12.447791100s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=0'0 unknown NOTIFY pruub 106.852066040s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 50 pg[6.6( v 42'1 (0'0,42'1] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50 pruub=12.446971893s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=42'1 lcod 0'0 active pruub 106.851341248s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 50 pg[6.6( v 42'1 (0'0,42'1] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50 pruub=12.446931839s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=42'1 lcod 0'0 unknown NOTIFY pruub 106.851341248s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:13 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 50 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 50 pg[6.a( v 40'1 (0'0,40'1] local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50 pruub=12.445801735s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=0'0 lcod 0'0 active pruub 106.850479126s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 50 pg[6.a( v 40'1 (0'0,40'1] local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50 pruub=12.445731163s) [1] r=-1 lpr=50 pi=[39,50)/1 crt=0'0 lcod 0'0 unknown NOTIFY pruub 106.850479126s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:13 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 50 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:13 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 50 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:13 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 31 05:56:13 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 50 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:13 compute-0 ceph-mon[75251]: 4.15 scrub starts
Jan 31 05:56:13 compute-0 ceph-mon[75251]: 4.15 scrub ok
Jan 31 05:56:13 compute-0 ceph-mon[75251]: pgmap v114: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 0 B/s wr, 156 op/s; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 05:56:14 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 31 05:56:14 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 31 05:56:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 31 05:56:14 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 31 05:56:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 05:56:14 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 31 05:56:14 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 31 05:56:14 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 51 pg[6.2( empty local-lis/les=50/51 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:14 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 51 pg[6.a( v 40'1 (0'0,40'1] local-lis/les=50/51 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=40'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:14 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 51 pg[6.e( v 40'3 lc 40'1 (0'0,40'3] local-lis/les=50/51 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=40'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:14 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 51 pg[6.6( v 42'1 lc 0'0 (0'0,42'1] local-lis/les=50/51 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=50) [1] r=0 lpr=50 pi=[39,50)/1 crt=42'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:14 compute-0 ceph-mon[75251]: 5.1f scrub starts
Jan 31 05:56:14 compute-0 ceph-mon[75251]: 5.1f scrub ok
Jan 31 05:56:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 05:56:14 compute-0 ceph-mon[75251]: osdmap e50: 3 total, 3 up, 3 in
Jan 31 05:56:14 compute-0 ceph-mon[75251]: 3.1a scrub starts
Jan 31 05:56:14 compute-0 ceph-mon[75251]: 3.1a scrub ok
Jan 31 05:56:14 compute-0 ceph-mon[75251]: 5.10 scrub starts
Jan 31 05:56:14 compute-0 ceph-mon[75251]: 5.10 scrub ok
Jan 31 05:56:14 compute-0 ceph-mon[75251]: osdmap e51: 3 total, 3 up, 3 in
Jan 31 05:56:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v117: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 0 B/s wr, 156 op/s; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 05:56:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 05:56:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:56:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:56:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:56:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:56:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:56:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:56:15 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 31 05:56:15 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 31 05:56:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 31 05:56:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 05:56:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 05:56:15 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 31 05:56:15 compute-0 ceph-mon[75251]: 7.1d scrub starts
Jan 31 05:56:15 compute-0 ceph-mon[75251]: 7.1d scrub ok
Jan 31 05:56:15 compute-0 ceph-mon[75251]: pgmap v117: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 0 B/s wr, 156 op/s; 100 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 05:56:16 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 31 05:56:16 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 31 05:56:17 compute-0 ceph-mon[75251]: 7.12 scrub starts
Jan 31 05:56:17 compute-0 ceph-mon[75251]: 7.12 scrub ok
Jan 31 05:56:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 05:56:17 compute-0 ceph-mon[75251]: osdmap e52: 3 total, 3 up, 3 in
Jan 31 05:56:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v119: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 05:56:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 05:56:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:17 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 31 05:56:17 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 31 05:56:17 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 52 pg[6.7( v 40'2 (0'0,40'2] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.619194984s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=40'2 active pruub 106.524963379s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:17 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 52 pg[6.b( v 40'3 (0'0,40'3] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.618795395s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=40'3 active pruub 106.524604797s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:17 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 52 pg[6.7( v 40'2 (0'0,40'2] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.619135857s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=40'2 unknown NOTIFY pruub 106.524963379s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:17 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 52 pg[6.b( v 40'3 (0'0,40'3] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.618746758s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=40'3 unknown NOTIFY pruub 106.524604797s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:17 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 52 pg[6.3( v 40'2 (0'0,40'2] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.621659279s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=40'2 active pruub 106.527626038s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:17 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 52 pg[6.3( v 40'2 (0'0,40'2] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.621617317s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=40'2 unknown NOTIFY pruub 106.527626038s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:17 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 52 pg[6.f( v 40'5 (0'0,40'5] local-lis/les=48/49 n=3 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.622395515s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=40'5 active pruub 106.528770447s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:17 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 52 pg[6.f( v 40'5 (0'0,40'5] local-lis/les=48/49 n=3 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=12.622299194s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=40'5 unknown NOTIFY pruub 106.528770447s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:17 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:17 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:17 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:17 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 31 05:56:18 compute-0 ceph-mon[75251]: 4.c scrub starts
Jan 31 05:56:18 compute-0 ceph-mon[75251]: 4.c scrub ok
Jan 31 05:56:18 compute-0 ceph-mon[75251]: pgmap v119: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 05:56:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 05:56:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 05:56:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 53 pg[6.c( v 40'2 (0'0,40'2] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=53 pruub=8.288843155s) [1] r=-1 lpr=53 pi=[39,53)/1 crt=40'2 lcod 40'1 active pruub 106.852226257s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 53 pg[6.c( v 40'2 (0'0,40'2] local-lis/les=39/40 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=53 pruub=8.288761139s) [1] r=-1 lpr=53 pi=[39,53)/1 crt=40'2 lcod 40'1 unknown NOTIFY pruub 106.852226257s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 53 pg[6.4( v 40'6 (0'0,40'6] local-lis/les=39/40 n=4 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=53 pruub=8.286702156s) [1] r=-1 lpr=53 pi=[39,53)/1 crt=40'6 lcod 40'5 active pruub 106.850631714s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 53 pg[6.4( v 40'6 (0'0,40'6] local-lis/les=39/40 n=4 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=53 pruub=8.286643982s) [1] r=-1 lpr=53 pi=[39,53)/1 crt=40'6 lcod 40'5 unknown NOTIFY pruub 106.850631714s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:18 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 31 05:56:18 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 53 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:18 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 53 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 53 pg[6.f( v 40'5 lc 40'1 (0'0,40'5] local-lis/les=52/53 n=3 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=40'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 53 pg[6.b( v 40'3 lc 0'0 (0'0,40'3] local-lis/les=52/53 n=1 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=40'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 53 pg[6.7( v 40'2 lc 40'1 (0'0,40'2] local-lis/les=52/53 n=1 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=40'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 53 pg[6.3( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=52/53 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=40'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 31 05:56:19 compute-0 ceph-mon[75251]: 4.0 scrub starts
Jan 31 05:56:19 compute-0 ceph-mon[75251]: 4.0 scrub ok
Jan 31 05:56:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 05:56:19 compute-0 ceph-mon[75251]: osdmap e53: 3 total, 3 up, 3 in
Jan 31 05:56:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 05:56:19 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 31 05:56:19 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 54 pg[6.4( v 40'6 lc 40'1 (0'0,40'6] local-lis/les=53/54 n=4 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=40'6 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:19 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 54 pg[6.c( v 40'2 lc 40'1 (0'0,40'2] local-lis/les=53/54 n=1 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=40'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v122: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:56:19 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 31 05:56:19 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 31 05:56:20 compute-0 ceph-mon[75251]: osdmap e54: 3 total, 3 up, 3 in
Jan 31 05:56:20 compute-0 ceph-mon[75251]: pgmap v122: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:56:20 compute-0 ceph-mon[75251]: 2.14 scrub starts
Jan 31 05:56:20 compute-0 ceph-mon[75251]: 2.14 scrub ok
Jan 31 05:56:20 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 31 05:56:20 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 31 05:56:20 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 31 05:56:20 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 31 05:56:21 compute-0 ceph-mon[75251]: 2.12 scrub starts
Jan 31 05:56:21 compute-0 ceph-mon[75251]: 2.12 scrub ok
Jan 31 05:56:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v123: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:56:21 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 31 05:56:21 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 31 05:56:22 compute-0 ceph-mon[75251]: 4.3 scrub starts
Jan 31 05:56:22 compute-0 ceph-mon[75251]: 4.3 scrub ok
Jan 31 05:56:22 compute-0 ceph-mon[75251]: pgmap v123: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:56:22 compute-0 ceph-mon[75251]: 2.10 scrub starts
Jan 31 05:56:22 compute-0 ceph-mon[75251]: 2.10 scrub ok
Jan 31 05:56:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:22 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 31 05:56:22 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 31 05:56:23 compute-0 ceph-mon[75251]: 4.19 scrub starts
Jan 31 05:56:23 compute-0 ceph-mon[75251]: 4.19 scrub ok
Jan 31 05:56:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v124: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 90 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:23 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 31 05:56:23 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 31 05:56:23 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 31 05:56:23 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 31 05:56:23 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 31 05:56:23 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 31 05:56:24 compute-0 ceph-mon[75251]: pgmap v124: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 90 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:24 compute-0 ceph-mon[75251]: 5.17 scrub starts
Jan 31 05:56:24 compute-0 ceph-mon[75251]: 5.17 scrub ok
Jan 31 05:56:24 compute-0 ceph-mon[75251]: 4.6 scrub starts
Jan 31 05:56:24 compute-0 ceph-mon[75251]: 4.6 scrub ok
Jan 31 05:56:25 compute-0 ceph-mon[75251]: 3.14 scrub starts
Jan 31 05:56:25 compute-0 ceph-mon[75251]: 3.14 scrub ok
Jan 31 05:56:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v125: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 283 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:25 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 05:56:25 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 05:56:25 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 31 05:56:25 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 31 05:56:25 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 31 05:56:25 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 31 05:56:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 31 05:56:26 compute-0 ceph-mon[75251]: pgmap v125: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 283 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 05:56:26 compute-0 ceph-mon[75251]: 5.8 scrub starts
Jan 31 05:56:26 compute-0 ceph-mon[75251]: 5.8 scrub ok
Jan 31 05:56:26 compute-0 ceph-mon[75251]: 4.b scrub starts
Jan 31 05:56:26 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 05:56:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 05:56:26 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 31 05:56:26 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 55 pg[6.5( v 40'3 (0'0,40'3] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=12.340978622s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=40'3 active pruub 114.525123596s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:26 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 55 pg[6.5( v 40'3 (0'0,40'3] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=12.340788841s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=40'3 unknown NOTIFY pruub 114.525123596s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:26 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 55 pg[6.d( v 40'3 (0'0,40'3] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=12.343385696s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=40'3 active pruub 114.527931213s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:26 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 55 pg[6.d( v 40'3 (0'0,40'3] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=12.343108177s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=40'3 unknown NOTIFY pruub 114.527931213s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:26 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 55 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:26 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 55 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:26 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 31 05:56:26 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 31 05:56:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v127: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 31 05:56:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 05:56:27 compute-0 ceph-mon[75251]: 4.b scrub ok
Jan 31 05:56:27 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 05:56:27 compute-0 ceph-mon[75251]: osdmap e55: 3 total, 3 up, 3 in
Jan 31 05:56:27 compute-0 ceph-mon[75251]: 7.17 scrub starts
Jan 31 05:56:27 compute-0 ceph-mon[75251]: 7.17 scrub ok
Jan 31 05:56:27 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 31 05:56:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 56 pg[6.d( v 40'3 lc 40'1 (0'0,40'3] local-lis/les=55/56 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=40'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 56 pg[6.5( v 40'3 lc 40'1 (0'0,40'3] local-lis/les=55/56 n=2 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=40'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:27 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 31 05:56:27 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 31 05:56:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 31 05:56:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 31 05:56:28 compute-0 ceph-mon[75251]: pgmap v127: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:28 compute-0 ceph-mon[75251]: osdmap e56: 3 total, 3 up, 3 in
Jan 31 05:56:28 compute-0 ceph-mon[75251]: 7.4 scrub starts
Jan 31 05:56:28 compute-0 ceph-mon[75251]: 7.4 scrub ok
Jan 31 05:56:28 compute-0 ceph-mon[75251]: 7.10 scrub starts
Jan 31 05:56:28 compute-0 ceph-mon[75251]: 7.10 scrub ok
Jan 31 05:56:28 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 31 05:56:28 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 31 05:56:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v129: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 283 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:29 compute-0 ceph-mon[75251]: 3.f scrub starts
Jan 31 05:56:29 compute-0 ceph-mon[75251]: 3.f scrub ok
Jan 31 05:56:29 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 31 05:56:29 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 31 05:56:30 compute-0 ceph-mon[75251]: pgmap v129: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 283 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 05:56:30 compute-0 ceph-mon[75251]: 7.16 scrub starts
Jan 31 05:56:30 compute-0 ceph-mon[75251]: 7.16 scrub ok
Jan 31 05:56:30 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 05:56:30 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 05:56:30 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 31 05:56:30 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 31 05:56:30 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 31 05:56:30 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 31 05:56:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v130: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 202 B/s, 0 objects/s recovering
Jan 31 05:56:31 compute-0 ceph-mon[75251]: 2.e scrub starts
Jan 31 05:56:31 compute-0 ceph-mon[75251]: 2.e scrub ok
Jan 31 05:56:31 compute-0 ceph-mon[75251]: 3.1b scrub starts
Jan 31 05:56:31 compute-0 ceph-mon[75251]: 3.1b scrub ok
Jan 31 05:56:31 compute-0 ceph-mon[75251]: 3.10 scrub starts
Jan 31 05:56:31 compute-0 ceph-mon[75251]: 3.10 scrub ok
Jan 31 05:56:31 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 31 05:56:31 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 31 05:56:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:32 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 31 05:56:32 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 31 05:56:32 compute-0 ceph-mon[75251]: pgmap v130: 181 pgs: 2 peering, 179 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 202 B/s, 0 objects/s recovering
Jan 31 05:56:32 compute-0 ceph-mon[75251]: 7.14 scrub starts
Jan 31 05:56:32 compute-0 ceph-mon[75251]: 7.14 scrub ok
Jan 31 05:56:32 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 05:56:32 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 05:56:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v131: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 31 05:56:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 05:56:33 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 05:56:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 31 05:56:33 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 05:56:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 05:56:33 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 31 05:56:33 compute-0 ceph-mon[75251]: 5.a scrub starts
Jan 31 05:56:33 compute-0 ceph-mon[75251]: 5.a scrub ok
Jan 31 05:56:33 compute-0 ceph-mon[75251]: 7.b scrub starts
Jan 31 05:56:33 compute-0 ceph-mon[75251]: 7.b scrub ok
Jan 31 05:56:33 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 05:56:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 05:56:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 05:56:34 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 31 05:56:34 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 31 05:56:34 compute-0 ceph-mon[75251]: pgmap v131: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 31 05:56:34 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 05:56:34 compute-0 ceph-mon[75251]: osdmap e57: 3 total, 3 up, 3 in
Jan 31 05:56:34 compute-0 ceph-mon[75251]: 7.9 scrub starts
Jan 31 05:56:34 compute-0 ceph-mon[75251]: 7.9 scrub ok
Jan 31 05:56:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v133: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 31 05:56:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 05:56:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 05:56:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 31 05:56:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 05:56:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 05:56:35 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 31 05:56:35 compute-0 ceph-mon[75251]: 2.c scrub starts
Jan 31 05:56:35 compute-0 ceph-mon[75251]: 2.c scrub ok
Jan 31 05:56:35 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 05:56:35 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 31 05:56:35 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 31 05:56:36 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 31 05:56:36 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 31 05:56:36 compute-0 ceph-mon[75251]: pgmap v133: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 31 05:56:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 05:56:36 compute-0 ceph-mon[75251]: osdmap e58: 3 total, 3 up, 3 in
Jan 31 05:56:36 compute-0 ceph-mon[75251]: 3.d scrub starts
Jan 31 05:56:36 compute-0 ceph-mon[75251]: 3.d scrub ok
Jan 31 05:56:36 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 31 05:56:36 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 31 05:56:36 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 31 05:56:36 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 31 05:56:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v135: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 31 05:56:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 05:56:37 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 05:56:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 31 05:56:37 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 05:56:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 05:56:37 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 31 05:56:37 compute-0 ceph-mon[75251]: 5.b scrub starts
Jan 31 05:56:37 compute-0 ceph-mon[75251]: 5.b scrub ok
Jan 31 05:56:37 compute-0 ceph-mon[75251]: 7.1f scrub starts
Jan 31 05:56:37 compute-0 ceph-mon[75251]: 7.1f scrub ok
Jan 31 05:56:37 compute-0 ceph-mon[75251]: 3.b scrub starts
Jan 31 05:56:37 compute-0 ceph-mon[75251]: 3.b scrub ok
Jan 31 05:56:37 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 05:56:37 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 31 05:56:37 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 31 05:56:37 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 31 05:56:37 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 31 05:56:38 compute-0 ceph-mon[75251]: pgmap v135: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Jan 31 05:56:38 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 05:56:38 compute-0 ceph-mon[75251]: osdmap e59: 3 total, 3 up, 3 in
Jan 31 05:56:38 compute-0 ceph-mon[75251]: 3.1 scrub starts
Jan 31 05:56:38 compute-0 ceph-mon[75251]: 3.1 scrub ok
Jan 31 05:56:38 compute-0 ceph-mon[75251]: 3.2 scrub starts
Jan 31 05:56:38 compute-0 ceph-mon[75251]: 3.2 scrub ok
Jan 31 05:56:38 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 31 05:56:38 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 31 05:56:38 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 59 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=59 pruub=11.524593353s) [2] r=-1 lpr=59 pi=[39,59)/1 crt=0'0 active pruub 130.851364136s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:38 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 59 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=59 pruub=11.524422646s) [2] r=-1 lpr=59 pi=[39,59)/1 crt=0'0 unknown NOTIFY pruub 130.851364136s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:38 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 59 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=59) [2] r=0 lpr=59 pi=[39,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v137: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 05:56:39 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 05:56:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 31 05:56:39 compute-0 ceph-mon[75251]: 3.c scrub starts
Jan 31 05:56:39 compute-0 ceph-mon[75251]: 3.c scrub ok
Jan 31 05:56:39 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 05:56:39 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 05:56:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 05:56:39 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 31 05:56:39 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 60 pg[6.9( empty local-lis/les=48/49 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=15.182867050s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 active pruub 130.525238037s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:39 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 60 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:39 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 60 pg[6.9( empty local-lis/les=48/49 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=60 pruub=15.182811737s) [0] r=-1 lpr=60 pi=[48,60)/1 crt=0'0 unknown NOTIFY pruub 130.525238037s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:39 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 60 pg[6.8( empty local-lis/les=59/60 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=59) [2] r=0 lpr=59 pi=[39,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:39 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 31 05:56:39 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 31 05:56:40 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 31 05:56:40 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 31 05:56:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 31 05:56:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 05:56:40 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 31 05:56:40 compute-0 ceph-mon[75251]: pgmap v137: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 05:56:40 compute-0 ceph-mon[75251]: osdmap e60: 3 total, 3 up, 3 in
Jan 31 05:56:40 compute-0 ceph-mon[75251]: 3.3 scrub starts
Jan 31 05:56:40 compute-0 ceph-mon[75251]: 3.3 scrub ok
Jan 31 05:56:40 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 61 pg[6.9( empty local-lis/les=60/61 n=0 ec=39/22 lis/c=48/48 les/c/f=49/49/0 sis=60) [0] r=0 lpr=60 pi=[48,60)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:40 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 31 05:56:40 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 31 05:56:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v140: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 05:56:41 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 05:56:41 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 31 05:56:41 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 31 05:56:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 31 05:56:41 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 05:56:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 05:56:41 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 31 05:56:41 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 62 pg[6.a( v 40'1 (0'0,40'1] local-lis/les=50/51 n=0 ec=39/22 lis/c=50/50 les/c/f=51/51/0 sis=62 pruub=13.501496315s) [0] r=-1 lpr=62 pi=[50,62)/1 crt=40'1 lcod 0'0 active pruub 130.889755249s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:41 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 62 pg[6.a( v 40'1 (0'0,40'1] local-lis/les=50/51 n=0 ec=39/22 lis/c=50/50 les/c/f=51/51/0 sis=62 pruub=13.501450539s) [0] r=-1 lpr=62 pi=[50,62)/1 crt=40'1 lcod 0'0 unknown NOTIFY pruub 130.889755249s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:41 compute-0 ceph-mon[75251]: 2.0 scrub starts
Jan 31 05:56:41 compute-0 ceph-mon[75251]: 2.0 scrub ok
Jan 31 05:56:41 compute-0 ceph-mon[75251]: osdmap e61: 3 total, 3 up, 3 in
Jan 31 05:56:41 compute-0 ceph-mon[75251]: 7.6 scrub starts
Jan 31 05:56:41 compute-0 ceph-mon[75251]: 7.6 scrub ok
Jan 31 05:56:41 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 05:56:41 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 62 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=50/50 les/c/f=51/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:42 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 31 05:56:42 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 31 05:56:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 31 05:56:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 05:56:42 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 31 05:56:42 compute-0 ceph-mon[75251]: pgmap v140: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:42 compute-0 ceph-mon[75251]: 5.0 scrub starts
Jan 31 05:56:42 compute-0 ceph-mon[75251]: 5.0 scrub ok
Jan 31 05:56:42 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 05:56:42 compute-0 ceph-mon[75251]: osdmap e62: 3 total, 3 up, 3 in
Jan 31 05:56:42 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 63 pg[6.a( v 40'1 (0'0,40'1] local-lis/les=62/63 n=0 ec=39/22 lis/c=50/50 les/c/f=51/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=40'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v143: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:43 compute-0 ceph-mon[75251]: 7.18 scrub starts
Jan 31 05:56:43 compute-0 ceph-mon[75251]: 7.18 scrub ok
Jan 31 05:56:43 compute-0 ceph-mon[75251]: osdmap e63: 3 total, 3 up, 3 in
Jan 31 05:56:44 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 31 05:56:44 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 31 05:56:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_05:56:44
Jan 31 05:56:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:56:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Some PGs (0.005525) are inactive; try again later
Jan 31 05:56:44 compute-0 ceph-mon[75251]: pgmap v143: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:45 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 31 05:56:45 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v144: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:56:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:56:45 compute-0 sudo[98435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juwknkxsuaihwfgmyvcdvcjfnycefgsf ; /usr/bin/python3'
Jan 31 05:56:45 compute-0 sudo[98435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:56:45 compute-0 ceph-mon[75251]: 3.6 scrub starts
Jan 31 05:56:45 compute-0 ceph-mon[75251]: 3.6 scrub ok
Jan 31 05:56:45 compute-0 python3[98437]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:56:45 compute-0 podman[98438]: 2026-01-31 05:56:45.652756026 +0000 UTC m=+0.049715373 container create 3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d (image=quay.io/ceph/ceph:v20, name=sleepy_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 05:56:45 compute-0 systemd[1]: Started libpod-conmon-3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d.scope.
Jan 31 05:56:45 compute-0 podman[98438]: 2026-01-31 05:56:45.628558599 +0000 UTC m=+0.025517936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:56:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b628055e79cba43bec520af358594efca7a4a70441e28e973e4d05079c57ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b628055e79cba43bec520af358594efca7a4a70441e28e973e4d05079c57ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:45 compute-0 podman[98438]: 2026-01-31 05:56:45.750964517 +0000 UTC m=+0.147923864 container init 3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d (image=quay.io/ceph/ceph:v20, name=sleepy_mayer, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:56:45 compute-0 podman[98438]: 2026-01-31 05:56:45.759246706 +0000 UTC m=+0.156206053 container start 3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d (image=quay.io/ceph/ceph:v20, name=sleepy_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:56:45 compute-0 podman[98438]: 2026-01-31 05:56:45.763267237 +0000 UTC m=+0.160226584 container attach 3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d (image=quay.io/ceph/ceph:v20, name=sleepy_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:56:45 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 31 05:56:45 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 31 05:56:45 compute-0 sleepy_mayer[98454]: could not fetch user info: no user info saved
Jan 31 05:56:46 compute-0 systemd[1]: libpod-3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d.scope: Deactivated successfully.
Jan 31 05:56:46 compute-0 podman[98438]: 2026-01-31 05:56:46.003171399 +0000 UTC m=+0.400130746 container died 3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d (image=quay.io/ceph/ceph:v20, name=sleepy_mayer, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:56:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-60b628055e79cba43bec520af358594efca7a4a70441e28e973e4d05079c57ad-merged.mount: Deactivated successfully.
Jan 31 05:56:46 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 31 05:56:46 compute-0 podman[98438]: 2026-01-31 05:56:46.050918167 +0000 UTC m=+0.447877514 container remove 3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d (image=quay.io/ceph/ceph:v20, name=sleepy_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:56:46 compute-0 systemd[1]: libpod-conmon-3df0be9984eefe7c5b446d88df01413df43c48c7ca7cbb5fe67bcc34ba31ad3d.scope: Deactivated successfully.
Jan 31 05:56:46 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 31 05:56:46 compute-0 sudo[98435]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:46 compute-0 sudo[98575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxrljpxurivmbxcokdamkcwysiefkjxk ; /usr/bin/python3'
Jan 31 05:56:46 compute-0 sudo[98575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:56:46 compute-0 python3[98577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:56:46 compute-0 podman[98578]: 2026-01-31 05:56:46.489752449 +0000 UTC m=+0.070376263 container create 862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44 (image=quay.io/ceph/ceph:v20, name=awesome_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 05:56:46 compute-0 ceph-mon[75251]: 2.1 scrub starts
Jan 31 05:56:46 compute-0 ceph-mon[75251]: 2.1 scrub ok
Jan 31 05:56:46 compute-0 ceph-mon[75251]: pgmap v144: 181 pgs: 1 peering, 180 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:46 compute-0 ceph-mon[75251]: 3.0 scrub starts
Jan 31 05:56:46 compute-0 ceph-mon[75251]: 3.0 scrub ok
Jan 31 05:56:46 compute-0 systemd[1]: Started libpod-conmon-862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44.scope.
Jan 31 05:56:46 compute-0 podman[98578]: 2026-01-31 05:56:46.457793447 +0000 UTC m=+0.038417351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 05:56:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662dc071c00b05798c9297798add6bac1431888cec5315f530a312a26a680286/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662dc071c00b05798c9297798add6bac1431888cec5315f530a312a26a680286/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:56:46 compute-0 podman[98578]: 2026-01-31 05:56:46.577712887 +0000 UTC m=+0.158336741 container init 862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44 (image=quay.io/ceph/ceph:v20, name=awesome_kare, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:56:46 compute-0 podman[98578]: 2026-01-31 05:56:46.585085271 +0000 UTC m=+0.165709085 container start 862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44 (image=quay.io/ceph/ceph:v20, name=awesome_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 05:56:46 compute-0 podman[98578]: 2026-01-31 05:56:46.596536837 +0000 UTC m=+0.177160641 container attach 862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44 (image=quay.io/ceph/ceph:v20, name=awesome_kare, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:56:46 compute-0 awesome_kare[98593]: {
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "user_id": "openstack",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "display_name": "openstack",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "email": "",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "suspended": 0,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "max_buckets": 1000,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "subusers": [],
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "keys": [
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         {
Jan 31 05:56:46 compute-0 awesome_kare[98593]:             "user": "openstack",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:             "access_key": "HR9Y19T9MKFP5LYP4UW4",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:             "secret_key": "WoDFSQAByw975INlCpow2SlvFrZpAKComVDCIdGE",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:             "active": true,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:             "create_date": "2026-01-31T05:56:46.790686Z"
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         }
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     ],
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "swift_keys": [],
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "caps": [],
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "op_mask": "read, write, delete",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "default_placement": "",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "default_storage_class": "",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "placement_tags": [],
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "bucket_quota": {
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "enabled": false,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "check_on_raw": false,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "max_size": -1,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "max_size_kb": 0,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "max_objects": -1
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     },
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "user_quota": {
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "enabled": false,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "check_on_raw": false,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "max_size": -1,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "max_size_kb": 0,
Jan 31 05:56:46 compute-0 awesome_kare[98593]:         "max_objects": -1
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     },
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "temp_url_keys": [],
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "type": "rgw",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "mfa_ids": [],
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "account_id": "",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "path": "/",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "create_date": "2026-01-31T05:56:46.790395Z",
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "tags": [],
Jan 31 05:56:46 compute-0 awesome_kare[98593]:     "group_ids": []
Jan 31 05:56:46 compute-0 awesome_kare[98593]: }
Jan 31 05:56:46 compute-0 systemd[1]: libpod-862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44.scope: Deactivated successfully.
Jan 31 05:56:46 compute-0 podman[98578]: 2026-01-31 05:56:46.821794515 +0000 UTC m=+0.402418329 container died 862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44 (image=quay.io/ceph/ceph:v20, name=awesome_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 05:56:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-662dc071c00b05798c9297798add6bac1431888cec5315f530a312a26a680286-merged.mount: Deactivated successfully.
Jan 31 05:56:46 compute-0 podman[98578]: 2026-01-31 05:56:46.869102931 +0000 UTC m=+0.449726745 container remove 862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44 (image=quay.io/ceph/ceph:v20, name=awesome_kare, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:56:46 compute-0 systemd[1]: libpod-conmon-862c5979201a40b32dc0312a829325110a28162b53a086c09e693d71d0984d44.scope: Deactivated successfully.
Jan 31 05:56:46 compute-0 sudo[98575]: pam_unix(sudo:session): session closed for user root
Jan 31 05:56:47 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 31 05:56:47 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 31 05:56:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v145: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 05:56:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 05:56:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:47 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 31 05:56:47 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 31 05:56:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 31 05:56:47 compute-0 ceph-mon[75251]: 5.6 scrub starts
Jan 31 05:56:47 compute-0 ceph-mon[75251]: 5.6 scrub ok
Jan 31 05:56:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 05:56:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 05:56:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 05:56:47 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 31 05:56:47 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 31 05:56:47 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 31 05:56:48 compute-0 ceph-mon[75251]: 5.e scrub starts
Jan 31 05:56:48 compute-0 ceph-mon[75251]: 5.e scrub ok
Jan 31 05:56:48 compute-0 ceph-mon[75251]: pgmap v145: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:48 compute-0 ceph-mon[75251]: 7.3 scrub starts
Jan 31 05:56:48 compute-0 ceph-mon[75251]: 7.3 scrub ok
Jan 31 05:56:48 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 05:56:48 compute-0 ceph-mon[75251]: osdmap e64: 3 total, 3 up, 3 in
Jan 31 05:56:48 compute-0 ceph-mon[75251]: 7.0 scrub starts
Jan 31 05:56:48 compute-0 ceph-mon[75251]: 7.0 scrub ok
Jan 31 05:56:48 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 64 pg[6.b( v 40'3 (0'0,40'3] local-lis/les=52/53 n=1 ec=39/22 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=9.041334152s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=40'3 active pruub 138.569412231s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:48 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 64 pg[6.b( v 40'3 (0'0,40'3] local-lis/les=52/53 n=1 ec=39/22 lis/c=52/52 les/c/f=53/53/0 sis=64 pruub=9.040596008s) [1] r=-1 lpr=64 pi=[52,64)/1 crt=40'3 unknown NOTIFY pruub 138.569412231s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:48 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 64 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v147: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 264 B/s wr, 36 op/s
Jan 31 05:56:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 05:56:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 05:56:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 31 05:56:49 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 05:56:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 05:56:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 05:56:49 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 31 05:56:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 65 pg[6.b( v 40'3 lc 0'0 (0'0,40'3] local-lis/les=64/65 n=1 ec=39/22 lis/c=52/52 les/c/f=53/53/0 sis=64) [1] r=0 lpr=64 pi=[52,64)/1 crt=40'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:49 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 31 05:56:49 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 31 05:56:50 compute-0 ceph-mon[75251]: pgmap v147: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 264 B/s wr, 36 op/s
Jan 31 05:56:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 05:56:50 compute-0 ceph-mon[75251]: osdmap e65: 3 total, 3 up, 3 in
Jan 31 05:56:50 compute-0 ceph-mon[75251]: 3.4 scrub starts
Jan 31 05:56:50 compute-0 ceph-mon[75251]: 3.4 scrub ok
Jan 31 05:56:50 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 31 05:56:50 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 31 05:56:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v149: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s
Jan 31 05:56:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 05:56:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 05:56:51 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 31 05:56:51 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 31 05:56:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 31 05:56:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 05:56:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 05:56:51 compute-0 ceph-mon[75251]: 7.7 scrub starts
Jan 31 05:56:51 compute-0 ceph-mon[75251]: 7.7 scrub ok
Jan 31 05:56:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 05:56:51 compute-0 ceph-mon[75251]: 7.f scrub starts
Jan 31 05:56:51 compute-0 ceph-mon[75251]: 7.f scrub ok
Jan 31 05:56:51 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 66 pg[6.d( v 40'3 (0'0,40'3] local-lis/les=55/56 n=2 ec=39/22 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.621225357s) [1] r=-1 lpr=66 pi=[55,66)/1 crt=40'3 active pruub 147.710800171s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:51 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 66 pg[6.d( v 40'3 (0'0,40'3] local-lis/les=55/56 n=2 ec=39/22 lis/c=55/55 les/c/f=56/56/0 sis=66 pruub=15.620864868s) [1] r=-1 lpr=66 pi=[55,66)/1 crt=40'3 unknown NOTIFY pruub 147.710800171s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:56:51 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 66 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=55/55 les/c/f=56/56/0 sis=66) [1] r=0 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:51 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 31 05:56:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5919722561849595e-06 of space, bias 4.0, pg target 0.0019103667074219515 quantized to 16 (current 16)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Jan 31 05:56:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:56:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:56:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 31 05:56:52 compute-0 ceph-mon[75251]: pgmap v149: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s
Jan 31 05:56:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 05:56:52 compute-0 ceph-mon[75251]: osdmap e66: 3 total, 3 up, 3 in
Jan 31 05:56:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:56:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:56:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 05:56:52 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 31 05:56:52 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 9a570181-5442-4ab7-a517-b320559f2047 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 05:56:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:56:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:56:52 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 67 pg[6.d( v 40'3 lc 40'1 (0'0,40'3] local-lis/les=66/67 n=2 ec=39/22 lis/c=55/55 les/c/f=56/56/0 sis=66) [1] r=0 lpr=66 pi=[55,66)/1 crt=40'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v152: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 364 B/s wr, 50 op/s
Jan 31 05:56:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:56:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:56:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 05:56:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 05:56:53 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 31 05:56:53 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 31 05:56:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 31 05:56:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:56:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:56:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 05:56:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 05:56:53 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 31 05:56:53 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 63a06732-f25b-418d-ad09-4b9a464104ca (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 05:56:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:56:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:56:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:56:53 compute-0 ceph-mon[75251]: osdmap e67: 3 total, 3 up, 3 in
Jan 31 05:56:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:56:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:56:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 05:56:53 compute-0 ceph-mon[75251]: 3.a scrub starts
Jan 31 05:56:53 compute-0 ceph-mon[75251]: 3.a scrub ok
Jan 31 05:56:53 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 31 05:56:53 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 31 05:56:54 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 31 05:56:54 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 31 05:56:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 31 05:56:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:56:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 05:56:54 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 31 05:56:54 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev eca978f2-48fc-4018-a5fa-cef55dcd843e (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 05:56:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 31 05:56:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 68 pg[8.0( v 40'6 (0'0,40'6] local-lis/les=39/40 n=6 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=68 pruub=11.731965065s) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 40'5 mlcod 40'5 active pruub 142.318206787s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:54 compute-0 ceph-mon[75251]: pgmap v152: 181 pgs: 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 364 B/s wr, 50 op/s
Jan 31 05:56:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:56:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:56:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 05:56:54 compute-0 ceph-mon[75251]: osdmap e68: 3 total, 3 up, 3 in
Jan 31 05:56:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:56:54 compute-0 ceph-mon[75251]: 7.1b scrub starts
Jan 31 05:56:54 compute-0 ceph-mon[75251]: 7.1b scrub ok
Jan 31 05:56:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:56:54 compute-0 ceph-mon[75251]: osdmap e69: 3 total, 3 up, 3 in
Jan 31 05:56:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.0( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=68 pruub=11.731965065s) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 40'5 mlcod 0'0 unknown pruub 142.318206787s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x562644652000) split_cache   moving buffer(0x562643ff0d80 space 0x562643608540 0x0~2e clean)
Jan 31 05:56:54 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x562644652000) split_cache   moving buffer(0x56264409cb00 space 0x562645569140 0x0~2e clean)
Jan 31 05:56:54 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x562644652000) split_cache   moving buffer(0x562643ff1a80 space 0x562643610240 0x0~1b4 clean)
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.1e( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.16( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.f( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.d( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.6( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.1f( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.c( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.b( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.7( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.1d( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.1b( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.12( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.9( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.15( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.8( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.1a( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.17( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.18( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.a( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.11( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.e( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.19( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.5( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.1c( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.13( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.14( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.4( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.10( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.1( v 40'6 (0'0,40'6] local-lis/les=39/40 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.3( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:54 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 69 pg[8.2( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=39/40 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v155: 212 pgs: 31 unknown, 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Jan 31 05:56:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:56:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:56:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:56:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Jan 31 05:56:55 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 31 05:56:55 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 31 05:56:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 31 05:56:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:56:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:56:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:56:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 05:56:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] update: starting ev 8075f40f-54cb-413e-8d37-9025830876b2 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[9.0( v 63'1508 (0'0,63'1508] local-lis/les=41/42 n=242 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=70 pruub=12.917063713s) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 63'1507 mlcod 63'1507 active pruub 144.504592896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 9a570181-5442-4ab7-a517-b320559f2047 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 9a570181-5442-4ab7-a517-b320559f2047 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 63a06732-f25b-418d-ad09-4b9a464104ca (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 63a06732-f25b-418d-ad09-4b9a464104ca (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev eca978f2-48fc-4018-a5fa-cef55dcd843e (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event eca978f2-48fc-4018-a5fa-cef55dcd843e (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] complete: finished ev 8075f40f-54cb-413e-8d37-9025830876b2 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 05:56:55 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 8075f40f-54cb-413e-8d37-9025830876b2 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.16( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.14( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.15( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.10( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.1( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.2( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.c( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[9.0( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=70 pruub=12.917063713s) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 63'1507 mlcod 0'0 unknown pruub 144.504592896s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.e( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.3( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.d( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.8( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.17( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.f( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.9( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.b( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.0( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 40'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643f7f780 space 0x562645935d40 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.7( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643f7fa00 space 0x5626458f5a40 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.5( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.6( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.1b( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.4( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.19( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.1f( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.18( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643958280 space 0x562645243140 0x0~98 clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643fe5000 space 0x56264594c240 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440aff00 space 0x562645903740 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643f23500 space 0x5626458ff740 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.1d( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.1e( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.11( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.1c( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.13( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264409c500 space 0x562643e4a540 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401e100 space 0x56264524dd40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401f800 space 0x562645231740 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.12( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.a( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 70 pg[8.1a( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=39/39 les/c/f=40/40/0 sis=68) [1] r=0 lpr=68 pi=[39,68)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401fc00 space 0x56264524cb40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401e580 space 0x562645199a40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440c8080 space 0x562643609d40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644066380 space 0x562643e4f140 0x0~1c clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440d6180 space 0x5626458ed740 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644040180 space 0x56264524d440 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401e380 space 0x56264526a540 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440d6c00 space 0x5626457c7d40 0x0~98 clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264404b400 space 0x562645690840 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264409d400 space 0x56264591a540 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401f100 space 0x56264526b740 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264409d600 space 0x562643e4ae40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644040a80 space 0x562645261740 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401e180 space 0x56264526ae40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644022500 space 0x5626444b7a40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644040380 space 0x562645260540 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643959b00 space 0x5626457c7440 0x0~98 clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440d6b00 space 0x56264569ae40 0x0~98 clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644041a00 space 0x562643e4c840 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643958d80 space 0x562643d16540 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644053a80 space 0x56264569a540 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401f280 space 0x5626458f2540 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644041580 space 0x562645260e40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401e000 space 0x5626458ed140 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440d6700 space 0x562645925140 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644023880 space 0x562645230540 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264409d680 space 0x5626458e8840 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401e780 space 0x562645199140 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401ec00 space 0x56264568ab40 0x0~98 clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643fe4a80 space 0x5626456b2240 0x0~98 clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643f7f000 space 0x56264593e240 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264404bf80 space 0x56264593eb40 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644041480 space 0x562643e4da40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264409cb80 space 0x562643e4b740 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440d6c80 space 0x562645268e40 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644022900 space 0x5626444b7140 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440ccc80 space 0x56264591ae40 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643959300 space 0x562645908b40 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x5626440a8480 space 0x5626456b1740 0x0~98 clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643fea900 space 0x562645198840 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643959800 space 0x5626458f9740 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264404b000 space 0x562643694b40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401f400 space 0x5626458fe540 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401f300 space 0x562643688240 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401f980 space 0x562645934540 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644053d00 space 0x5626458f3740 0x0~9a clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643ff1300 space 0x562645230e40 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264401e880 space 0x562643695440 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562644040580 space 0x562643e4d140 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x562643958880 space 0x5626457cb140 0x0~98 clean)
Jan 31 05:56:55 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5626441a2900) split_cache   moving buffer(0x56264404ae00 space 0x562643694240 0x0~6e clean)
Jan 31 05:56:55 compute-0 ceph-mon[75251]: 5.d scrub starts
Jan 31 05:56:55 compute-0 ceph-mon[75251]: 5.d scrub ok
Jan 31 05:56:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:56:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:56:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 05:56:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:56:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:56:55 compute-0 ceph-mon[75251]: osdmap e70: 3 total, 3 up, 3 in
Jan 31 05:56:55 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 70 pg[10.0( v 63'66 (0'0,63'66] local-lis/les=43/44 n=9 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=70 pruub=14.644083023s) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 63'65 mlcod 63'65 active pruub 139.508224487s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:55 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 70 pg[10.0( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=70 pruub=14.644083023s) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 63'65 mlcod 0'0 unknown pruub 139.508224487s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 31 05:56:56 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 31 05:56:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 31 05:56:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 05:56:56 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.14( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.16( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.17( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.15( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.11( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.3( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1e( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.19( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.d( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.b( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.a( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.13( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.12( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.d( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.c( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.11( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.10( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1f( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.f( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1b( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1c( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1a( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.9( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.18( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1d( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.6( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.5( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.4( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.8( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.f( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.9( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.2( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.e( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.e( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.c( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1( v 63'66 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.a( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.2( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.3( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.14( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-mon[75251]: pgmap v155: 212 pgs: 31 unknown, 181 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.15( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.7( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.16( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.17( v 63'66 lc 0'0 (0'0,63'66] local-lis/les=43/44 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.8( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.6( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-mon[75251]: 7.13 scrub starts
Jan 31 05:56:56 compute-0 ceph-mon[75251]: 7.13 scrub ok
Jan 31 05:56:56 compute-0 ceph-mon[75251]: osdmap e71: 3 total, 3 up, 3 in
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.4( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.7( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.5( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.18( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.19( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1a( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1c( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1f( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1e( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.13( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.10( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.12( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1e( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1d( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.d( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.b( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.13( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.19( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.b( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.a( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1b( v 63'1508 lc 0'0 (0'0,63'1508] local-lis/les=41/42 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.14( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.0( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 63'1507 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.10( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.11( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1f( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1a( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1c( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.18( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1b( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1d( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.12( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.5( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.6( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.a( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.4( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.5( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.2( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.8( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.4( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.f( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.0( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 63'65 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.9( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.e( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1a( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.c( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.1( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.10( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.2( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.3( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.15( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.14( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.16( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.17( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 71 pg[10.7( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=43/43 les/c/f=44/44/0 sis=70) [2] r=0 lpr=70 pi=[43,70)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.12( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:56 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 71 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=41/41 les/c/f=42/42/0 sis=70) [1] r=0 lpr=70 pi=[41,70)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v158: 274 pgs: 1 peering, 93 unknown, 180 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Jan 31 05:56:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 05:56:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:56:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:56:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 31 05:56:57 compute-0 ceph-mon[75251]: 2.11 scrub starts
Jan 31 05:56:57 compute-0 ceph-mon[75251]: 2.11 scrub ok
Jan 31 05:56:57 compute-0 ceph-mon[75251]: pgmap v158: 274 pgs: 1 peering, 93 unknown, 180 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Jan 31 05:56:57 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 05:56:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:56:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 05:56:57 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 31 05:56:57 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 31 05:56:57 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 31 05:56:58 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 31 05:56:58 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 31 05:56:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 05:56:58 compute-0 ceph-mon[75251]: osdmap e72: 3 total, 3 up, 3 in
Jan 31 05:56:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 72 pg[11.0( v 63'2 (0'0,63'2] local-lis/les=45/46 n=2 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=12.906325340s) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 63'1 mlcod 63'1 active pruub 148.398864746s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 72 pg[11.0( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=72 pruub=12.906325340s) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 63'1 mlcod 0'0 unknown pruub 148.398864746s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 31 05:56:59 compute-0 ceph-mon[75251]: 5.1b scrub starts
Jan 31 05:56:59 compute-0 ceph-mon[75251]: 5.1b scrub ok
Jan 31 05:56:59 compute-0 ceph-mon[75251]: 3.9 scrub starts
Jan 31 05:56:59 compute-0 ceph-mon[75251]: 3.9 scrub ok
Jan 31 05:56:59 compute-0 ceph-mon[75251]: pgmap v160: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:56:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 05:56:59 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.16( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.15( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.17( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.14( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.2( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=1 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.13( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1( v 63'2 (0'0,63'2] local-lis/les=45/46 n=1 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.f( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.e( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.d( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.b( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.8( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.a( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.c( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.3( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.4( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.5( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.6( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.18( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1a( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1b( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.7( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1d( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1c( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1e( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1f( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.12( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.9( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.19( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.10( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.11( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=45/46 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.16( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.15( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.17( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.14( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.0( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 63'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1( v 63'2 (0'0,63'2] local-lis/les=72/73 n=1 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.2( v 63'2 (0'0,63'2] local-lis/les=72/73 n=1 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.f( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.13( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.d( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.b( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.e( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.8( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.a( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.3( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.5( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.4( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.6( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1b( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.18( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1d( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.7( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1a( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1e( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1f( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.9( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.12( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.19( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.10( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.1c( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.c( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:56:59 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 73 pg[11.11( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=45/45 les/c/f=46/46/0 sis=72) [1] r=0 lpr=72 pi=[45,72)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:00 compute-0 sshd-session[98692]: Accepted publickey for zuul from 192.168.122.30 port 56028 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:57:00 compute-0 systemd-logind[797]: New session 33 of user zuul.
Jan 31 05:57:00 compute-0 systemd[1]: Started Session 33 of User zuul.
Jan 31 05:57:00 compute-0 sshd-session[98692]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:57:00 compute-0 ceph-mgr[75550]: [progress INFO root] Writing back 16 completed events
Jan 31 05:57:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 05:57:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:00 compute-0 ceph-mon[75251]: osdmap e73: 3 total, 3 up, 3 in
Jan 31 05:57:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:01 compute-0 python3.9[98845]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:57:01 compute-0 ceph-mon[75251]: pgmap v162: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:02 compute-0 sudo[99061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohypwjodjbijqrnofkcokbkgzbxlxvkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839022.130252-27-179961214332582/AnsiballZ_command.py'
Jan 31 05:57:02 compute-0 sudo[99061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:57:02 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 31 05:57:02 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 31 05:57:02 compute-0 python3.9[99063]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:57:03 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 31 05:57:03 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 31 05:57:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:03 compute-0 ceph-mon[75251]: 7.d scrub starts
Jan 31 05:57:03 compute-0 ceph-mon[75251]: 7.d scrub ok
Jan 31 05:57:03 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 31 05:57:03 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 31 05:57:04 compute-0 ceph-mon[75251]: 2.13 scrub starts
Jan 31 05:57:04 compute-0 ceph-mon[75251]: 2.13 scrub ok
Jan 31 05:57:04 compute-0 ceph-mon[75251]: pgmap v163: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:57:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 05:57:05 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[6.f( v 40'5 (0'0,40'5] local-lis/les=52/53 n=3 ec=39/22 lis/c=52/52 les/c/f=53/53/0 sis=74 pruub=8.564013481s) [2] r=-1 lpr=74 pi=[52,74)/1 crt=40'5 active pruub 154.569885254s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[6.f( v 40'5 (0'0,40'5] local-lis/les=52/53 n=3 ec=39/22 lis/c=52/52 les/c/f=53/53/0 sis=74 pruub=8.563448906s) [2] r=-1 lpr=74 pi=[52,74)/1 crt=40'5 unknown NOTIFY pruub 154.569885254s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.171727180s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.643310547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.15( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.118565559s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.590377808s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.14( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.118442535s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.590408325s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.17( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.374888420s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.846908569s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.15( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.374840736s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.846878052s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.14( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.118344307s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.590408325s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.17( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.374821663s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.846908569s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.15( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.374789238s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.846878052s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.17( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.15( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.118506432s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.590377808s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.10( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.117518425s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.590408325s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.14( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.14( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.373611450s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847000122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.10( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.117041588s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.590408325s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.14( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.373579025s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847000122s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.2( v 63'2 (0'0,63'2] local-lis/les=72/73 n=1 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.373362541s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847091675s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.2( v 63'2 (0'0,63'2] local-lis/les=72/73 n=1 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.373318672s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847091675s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1( v 63'2 (0'0,63'2] local-lis/les=72/73 n=1 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.373161316s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847091675s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.168290138s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.642166138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1( v 63'2 (0'0,63'2] local-lis/les=72/73 n=1 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.373123169s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847091675s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.168183327s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.642166138s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.168112755s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.642166138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.168077469s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.642166138s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.168375015s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.642791748s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.168337822s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.642791748s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.f( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.372613907s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847106934s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.f( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.372587204s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847106934s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.2( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.115756035s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.590484619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.2( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.115701675s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.590484619s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.168600082s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.643524170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.e( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.372241020s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847152710s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.168560982s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.643524170s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.e( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.372182846s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847152710s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.d( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.134883881s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610061646s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.d( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.134835243s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610061646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.d( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.371859550s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847137451s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-mgr[75550]: [progress INFO root] Completed event 90be31e5-f5b2-4b30-9397-6d45e16427ff (Global Recovery Event) in 10 seconds
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.171647072s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.643310547s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.d( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.371831894s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847137451s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.167463303s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.643035889s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.e( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.134455681s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.609863281s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.167422295s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.643035889s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.e( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.134221077s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.609863281s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.10( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.c( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.133873940s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.609863281s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.c( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.133823395s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.609863281s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.14( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.1( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.11( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.b( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.370237350s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847152710s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.b( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.370198250s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847152710s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.166307449s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.643417358s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.8( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.369953156s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847167969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.166183472s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.643417358s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.8( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.369914055s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847167969s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.b( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.132962227s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610275269s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.b( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.132929802s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610275269s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.9( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.132726669s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610229492s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.3( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.369651794s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847213745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.3( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.369622231s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847213745s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.f( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.132355690s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610198975s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.166640282s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.644500732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.166604996s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.644500732s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.4( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.369245529s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847259521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.3( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.f( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.9( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.132254601s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610229492s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.4( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.369211197s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847259521s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.f( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.132314682s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610198975s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.6( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.131896019s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610305786s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.6( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.368868828s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847290039s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.d( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.165201187s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.643615723s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.6( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.131868362s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610305786s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.165171623s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.643615723s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.6( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.368828773s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847290039s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.5( v 71'1509 (0'0,71'1509] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.164727211s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 63'1508 active pruub 156.643676758s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.18( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.368412971s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847366333s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.18( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.368374825s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847366333s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.4( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.131376266s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610397339s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1b( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.131306648s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610382080s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.4( v 40'6 (0'0,40'6] local-lis/les=68/70 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.131345749s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610397339s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.5( v 71'1509 (0'0,71'1509] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.164662361s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 63'1508 unknown NOTIFY pruub 156.643676758s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1b( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.131276131s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610382080s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.e( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.18( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.130006790s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610427856s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.18( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.129975319s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610427856s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.163137436s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.643692017s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.163103104s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.643692017s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.e( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1c( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.368822098s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.849655151s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1c( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.368790627s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.849655151s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.c( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1b( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.366392136s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847290039s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.15( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[8.15( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1f( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.129464149s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610443115s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.15( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.9( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1a( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.366659164s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.847732544s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1b( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.366011620s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847290039s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1f( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.129032135s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610443115s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1e( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.367814064s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.849563599s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1d( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.128744125s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610534668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1e( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.367770195s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.849563599s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.162863731s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.644729614s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1d( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.128685951s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610534668s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.162822723s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.644729614s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.166611671s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.648727417s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=52/52 les/c/f=53/53/0 sis=74) [2] r=0 lpr=74 pi=[52,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.166584969s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.648727417s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1c( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.127943039s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610565186s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1a( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365970612s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.847732544s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1c( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.127909660s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610565186s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.11( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.366781235s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.849838257s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.11( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.366739273s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.849838257s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.12( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.127284050s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610595703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.b( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.2( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1f( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.366106987s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.849578857s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.12( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365925789s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.849594116s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.12( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365852356s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.849594116s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.11( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.126737595s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610534668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.1( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.11( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.126704216s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610534668s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.9( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365546227s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.849609375s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.9( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365520477s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.849609375s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.10( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365497589s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.849761963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.10( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365475655s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.849761963s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.3( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.4( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.12( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.126124382s) [2] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610595703s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.19( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365074158s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 active pruub 151.849731445s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.19( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.365048409s) [0] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.849731445s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[11.1f( v 63'2 (0'0,63'2] local-lis/les=72/73 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74 pruub=10.366062164s) [2] r=-1 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 unknown NOTIFY pruub 151.849578857s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[8.2( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.f( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.d( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1a( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.125612259s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 active pruub 155.610595703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.163702011s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.648727417s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.163672447s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.648727417s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[8.1a( v 40'6 (0'0,40'6] local-lis/les=68/70 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74 pruub=14.125576973s) [0] r=-1 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 unknown NOTIFY pruub 155.610595703s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.8( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.6( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[8.d( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[8.4( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.1c( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.6( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.b( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.9( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.18( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.5( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.18( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[8.1b( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.19( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.1f( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.19( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148334503s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.764587402s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.19( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148294449s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.764587402s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.d( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.147774696s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 active pruub 149.764297485s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.b( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.147733688s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.764297485s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.d( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.147724152s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 unknown NOTIFY pruub 149.764297485s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.b( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.147699356s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.764297485s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.1d( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.13( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.147421837s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.764358521s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.11( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.152468681s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.769577026s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.13( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.147374153s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.764358521s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.11( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.152413368s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.769577026s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.1e( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.146855354s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.764221191s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.1e( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.146749496s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.764221191s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.1e( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.1d( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.10( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.151418686s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.769546509s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.10( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.151370049s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.769546509s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.154287338s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.644897461s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.12( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.151182175s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 active pruub 149.769714355s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.1a( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.150902748s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.769607544s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.7( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.151617050s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.770370483s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.7( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.151576042s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.770370483s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.1a( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.150794029s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.769607544s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.154241562s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.644897461s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.164706230s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 active pruub 156.648849487s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.10( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.12( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.151136398s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 unknown NOTIFY pruub 149.769714355s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.1a( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[11.19( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.6( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.149907112s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.769760132s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.6( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.149846077s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.769760132s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.1b( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[8.1c( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[8.1a( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.d( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.8( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148819923s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.769790649s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.8( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148785591s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.769790649s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.f( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148775101s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.769866943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.f( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148733139s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.769866943s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.9( v 71'67 (0'0,71'67] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148889542s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 active pruub 149.770172119s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.9( v 71'67 (0'0,71'67] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148837090s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 unknown NOTIFY pruub 149.770172119s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.4( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.149065018s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.769790649s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.11( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.4( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.148324966s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.769790649s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.1e( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.7( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.8( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.1b( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.9( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.4( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.157013893s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 156.648849487s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.19( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.b( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.13( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.11( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.12( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[8.11( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.9( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.13( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.10( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[9.b( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.e( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.146409035s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 active pruub 149.770187378s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[8.12( empty local-lis/les=0/0 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.e( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.146343231s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 unknown NOTIFY pruub 149.770187378s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.1a( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.1( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.145896912s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.770233154s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[11.1f( empty local-lis/les=0/0 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.1( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.145804405s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.770233154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.e( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.12( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.2( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.145289421s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.770278931s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.14( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.145232201s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 active pruub 149.770278931s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.14( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.145189285s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 unknown NOTIFY pruub 149.770278931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.6( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.1( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.2( v 63'66 (0'0,63'66] local-lis/les=70/71 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.145094872s) [1] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.770278931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.15( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.144818306s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 active pruub 149.770278931s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.15( v 71'67 (0'0,71'67] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.144769669s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 63'66 unknown NOTIFY pruub 149.770278931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.f( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.17( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.144663811s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.770339966s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.16( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.144782066s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 active pruub 149.770294189s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.16( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.144525528s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.770294189s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.15( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 74 pg[10.17( v 63'66 (0'0,63'66] local-lis/les=70/71 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74 pruub=15.144626617s) [0] r=-1 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 unknown NOTIFY pruub 149.770339966s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.16( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 74 pg[10.17( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.14( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 74 pg[10.2( empty local-lis/les=0/0 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:05 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 31 05:57:05 compute-0 ceph-mon[75251]: 5.15 scrub starts
Jan 31 05:57:05 compute-0 ceph-mon[75251]: 5.15 scrub ok
Jan 31 05:57:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 05:57:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:57:05 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 31 05:57:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 05:57:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 05:57:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 31 05:57:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 05:57:06 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 31 05:57:06 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 31 05:57:06 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.5( v 71'1509 (0'0,71'1509] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.5( v 71'1509 (0'0,71'1509] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.11( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.5( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.13( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.11( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.5( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.13( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.b( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.b( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.7( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.7( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.17( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.17( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.d( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.d( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.1( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.9( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.9( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.1( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.1d( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.1d( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.3( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.3( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.1b( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.1b( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.15( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.15( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.19( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[9.19( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] r=-1 lpr=75 pi=[70,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.1a( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[8.1c( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[8.1b( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-mon[75251]: pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:57:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 05:57:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:57:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 05:57:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:57:07 compute-0 ceph-mon[75251]: osdmap e74: 3 total, 3 up, 3 in
Jan 31 05:57:07 compute-0 ceph-mon[75251]: 3.1f scrub starts
Jan 31 05:57:07 compute-0 ceph-mon[75251]: 3.1f scrub ok
Jan 31 05:57:07 compute-0 ceph-mon[75251]: 2.1e scrub starts
Jan 31 05:57:07 compute-0 ceph-mon[75251]: 2.1e scrub ok
Jan 31 05:57:07 compute-0 ceph-mon[75251]: osdmap e75: 3 total, 3 up, 3 in
Jan 31 05:57:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 16 unknown, 56 peering, 233 active+clean; 457 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.10( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.1f( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.b( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.12( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.18( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.11( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[8.12( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.1e( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[8.11( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=40'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.1b( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.1c( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[8.4( v 40'6 (0'0,40'6] local-lis/les=74/75 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[8.d( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.8( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.d( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.3( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.2( v 63'2 (0'0,63'2] local-lis/les=74/75 n=1 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.9( v 63'2 lc 0'0 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[8.2( v 40'6 (0'0,40'6] local-lis/les=74/75 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[6.f( v 40'5 lc 40'1 (0'0,40'5] local-lis/les=74/75 n=3 ec=39/22 lis/c=52/52 les/c/f=53/53/0 sis=74) [2] r=0 lpr=74 pi=[52,74)/1 crt=40'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[11.15( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [2] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 75 pg[8.15( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [2] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.1a( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.9( v 71'67 lc 63'58 (0'0,71'67] local-lis/les=74/75 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=71'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.6( v 63'66 (0'0,63'66] local-lis/les=74/75 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.2( v 63'66 (0'0,63'66] local-lis/les=74/75 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.4( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.b( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.8( v 63'66 (0'0,63'66] local-lis/les=74/75 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.15( v 71'67 lc 63'53 (0'0,71'67] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=71'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.14( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.6( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=74/75 n=1 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.10( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.4( v 63'66 (0'0,63'66] local-lis/les=74/75 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.9( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.6( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.7( v 63'66 (0'0,63'66] local-lis/les=74/75 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.d( v 71'67 lc 63'55 (0'0,71'67] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=71'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.17( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.e( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.e( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.f( v 40'6 lc 0'0 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.f( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.c( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.e( v 71'67 lc 63'54 (0'0,71'67] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=71'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.1( v 63'2 (0'0,63'2] local-lis/les=74/75 n=1 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.18( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.1a( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.1e( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.17( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.14( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[11.19( v 63'2 (0'0,63'2] local-lis/les=74/75 n=0 ec=72/45 lis/c=72/72 les/c/f=73/73/0 sis=74) [0] r=0 lpr=74 pi=[72,74)/1 crt=63'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.16( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[10.1( v 63'66 (0'0,63'66] local-lis/les=74/75 n=1 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [0] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.1d( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 75 pg[8.1f( v 40'6 (0'0,40'6] local-lis/les=74/75 n=0 ec=68/39 lis/c=68/68 les/c/f=70/70/0 sis=74) [0] r=0 lpr=74 pi=[68,74)/1 crt=40'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.11( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.10( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.13( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.f( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.12( v 71'67 lc 47'17 (0'0,71'67] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=71'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.14( v 71'67 lc 63'57 (0'0,71'67] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=71'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.b( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 75 pg[10.19( v 63'66 (0'0,63'66] local-lis/les=74/75 n=0 ec=70/43 lis/c=70/70 les/c/f=71/71/0 sis=74) [1] r=0 lpr=74 pi=[70,74)/1 crt=63'66 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 31 05:57:07 compute-0 sudo[99088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:57:07 compute-0 sudo[99088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:07 compute-0 sudo[99088]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 05:57:07 compute-0 sudo[99113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 05:57:07 compute-0 sudo[99113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:07 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 31 05:57:08 compute-0 ceph-mon[75251]: 3.12 scrub starts
Jan 31 05:57:08 compute-0 ceph-mon[75251]: 3.12 scrub ok
Jan 31 05:57:08 compute-0 ceph-mon[75251]: pgmap v167: 305 pgs: 16 unknown, 56 peering, 233 active+clean; 457 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:08 compute-0 ceph-mon[75251]: osdmap e76: 3 total, 3 up, 3 in
Jan 31 05:57:08 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 31 05:57:08 compute-0 sudo[99113]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.5( v 71'1509 (0'0,71'1509] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=71'1509 lcod 63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:57:08 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:57:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:57:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:57:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=13}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 31 05:57:08 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 76 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=75) [0]/[1] async=[0] r=0 lpr=75 pi=[70,75)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:57:08 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:57:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:57:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:57:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:57:08 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:57:08 compute-0 sudo[99170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:57:08 compute-0 sudo[99170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:08 compute-0 sudo[99170]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:08 compute-0 sudo[99195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:57:08 compute-0 sudo[99195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:08 compute-0 podman[99232]: 2026-01-31 05:57:08.782078458 +0000 UTC m=+0.032319477 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:57:08 compute-0 podman[99232]: 2026-01-31 05:57:08.920530375 +0000 UTC m=+0.170771404 container create f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:57:09 compute-0 systemd[1]: Started libpod-conmon-f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71.scope.
Jan 31 05:57:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:57:09 compute-0 podman[99232]: 2026-01-31 05:57:09.105069655 +0000 UTC m=+0.355310694 container init f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bassi, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:57:09 compute-0 podman[99232]: 2026-01-31 05:57:09.114548641 +0000 UTC m=+0.364789680 container start f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bassi, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:57:09 compute-0 podman[99232]: 2026-01-31 05:57:09.120272112 +0000 UTC m=+0.370513131 container attach f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 05:57:09 compute-0 systemd[1]: libpod-f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71.scope: Deactivated successfully.
Jan 31 05:57:09 compute-0 conmon[99248]: conmon f8575b2ec658746cbb5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71.scope/container/memory.events
Jan 31 05:57:09 compute-0 laughing_bassi[99248]: 167 167
Jan 31 05:57:09 compute-0 podman[99232]: 2026-01-31 05:57:09.123832422 +0000 UTC m=+0.374073461 container died f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 05:57:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 56 peering, 233 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 117/283 objects misplaced (41.343%); 0 B/s, 0 objects/s recovering
Jan 31 05:57:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f6225a348d24807a289126cf72f90e3cb51705b42f8bef6ab929355dcb9662-merged.mount: Deactivated successfully.
Jan 31 05:57:09 compute-0 podman[99232]: 2026-01-31 05:57:09.183099635 +0000 UTC m=+0.433340674 container remove f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:57:09 compute-0 systemd[1]: libpod-conmon-f8575b2ec658746cbb5ab67e5ad975c76452d069d7cb427b9247a12015ac2b71.scope: Deactivated successfully.
Jan 31 05:57:09 compute-0 ceph-mon[75251]: 3.1c scrub starts
Jan 31 05:57:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:57:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:57:09 compute-0 ceph-mon[75251]: 3.1c scrub ok
Jan 31 05:57:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:57:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:57:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:57:09 compute-0 podman[99274]: 2026-01-31 05:57:09.32647252 +0000 UTC m=+0.039310425 container create 742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 05:57:09 compute-0 podman[99274]: 2026-01-31 05:57:09.311836719 +0000 UTC m=+0.024674634 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:57:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 31 05:57:09 compute-0 systemd[1]: Started libpod-conmon-742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3.scope.
Jan 31 05:57:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8597fafddf0ee957230572723fddc760ebf1ae47ade391697ab0f2b67df7c65f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8597fafddf0ee957230572723fddc760ebf1ae47ade391697ab0f2b67df7c65f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8597fafddf0ee957230572723fddc760ebf1ae47ade391697ab0f2b67df7c65f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8597fafddf0ee957230572723fddc760ebf1ae47ade391697ab0f2b67df7c65f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8597fafddf0ee957230572723fddc760ebf1ae47ade391697ab0f2b67df7c65f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 05:57:09 compute-0 podman[99274]: 2026-01-31 05:57:09.567975738 +0000 UTC m=+0.280813673 container init 742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:57:09 compute-0 podman[99274]: 2026-01-31 05:57:09.577318451 +0000 UTC m=+0.290156366 container start 742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 05:57:09 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.559759140s) [0] async=[0] r=-1 lpr=77 pi=[70,77)/1 crt=63'1508 lcod 0'0 active pruub 160.404388428s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.559697151s) [0] r=-1 lpr=77 pi=[70,77)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404388428s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.559813499s) [0] async=[0] r=-1 lpr=77 pi=[70,77)/1 crt=63'1508 lcod 0'0 active pruub 160.404602051s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.559754372s) [0] r=-1 lpr=77 pi=[70,77)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404602051s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.559608459s) [0] async=[0] r=-1 lpr=77 pi=[70,77)/1 crt=63'1508 lcod 0'0 active pruub 160.404647827s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.559553146s) [0] r=-1 lpr=77 pi=[70,77)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404647827s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.559236526s) [0] async=[0] r=-1 lpr=77 pi=[70,77)/1 crt=63'1508 lcod 0'0 active pruub 160.404541016s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.559183121s) [0] r=-1 lpr=77 pi=[70,77)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404541016s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.5( v 76'1510 (0'0,76'1510] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.477597237s) [0] async=[0] r=-1 lpr=77 pi=[70,77)/1 crt=71'1509 lcod 71'1509 active pruub 160.323089600s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 77 pg[9.5( v 76'1510 (0'0,76'1510] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77 pruub=14.477545738s) [0] r=-1 lpr=77 pi=[70,77)/1 crt=71'1509 lcod 71'1509 unknown NOTIFY pruub 160.323089600s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.5( v 76'1510 (0'0,76'1510] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 pct=0'0 crt=71'1509 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.5( v 76'1510 (0'0,76'1510] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=71'1509 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 77 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:09 compute-0 podman[99274]: 2026-01-31 05:57:09.904734931 +0000 UTC m=+0.617572846 container attach 742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sanderson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:57:09 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 31 05:57:09 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 31 05:57:10 compute-0 stoic_sanderson[99291]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:57:10 compute-0 stoic_sanderson[99291]: --> All data devices are unavailable
Jan 31 05:57:10 compute-0 systemd[1]: libpod-742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3.scope: Deactivated successfully.
Jan 31 05:57:10 compute-0 podman[99274]: 2026-01-31 05:57:10.085630079 +0000 UTC m=+0.798467994 container died 742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 05:57:10 compute-0 ceph-mgr[75550]: [progress INFO root] Writing back 17 completed events
Jan 31 05:57:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 05:57:10 compute-0 sudo[99061]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 31 05:57:10 compute-0 sshd-session[98695]: Connection closed by 192.168.122.30 port 56028
Jan 31 05:57:10 compute-0 sshd-session[98692]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:57:10 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Jan 31 05:57:10 compute-0 systemd[1]: session-33.scope: Consumed 7.435s CPU time.
Jan 31 05:57:10 compute-0 systemd-logind[797]: Session 33 logged out. Waiting for processes to exit.
Jan 31 05:57:10 compute-0 systemd-logind[797]: Removed session 33.
Jan 31 05:57:11 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 56 peering, 233 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 117/283 objects misplaced (41.343%); 0 B/s, 0 objects/s recovering
Jan 31 05:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8597fafddf0ee957230572723fddc760ebf1ae47ade391697ab0f2b67df7c65f-merged.mount: Deactivated successfully.
Jan 31 05:57:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 05:57:11 compute-0 ceph-mon[75251]: pgmap v169: 305 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 56 peering, 233 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 117/283 objects misplaced (41.343%); 0 B/s, 0 objects/s recovering
Jan 31 05:57:11 compute-0 ceph-mon[75251]: osdmap e77: 3 total, 3 up, 3 in
Jan 31 05:57:11 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 78 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78 pruub=13.199773788s) [0] async=[0] r=-1 lpr=78 pi=[70,78)/1 crt=63'1508 lcod 0'0 active pruub 160.404647827s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:11 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 78 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78 pruub=13.199656487s) [0] r=-1 lpr=78 pi=[70,78)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404647827s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:11 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 78 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78 pruub=13.197978973s) [0] async=[0] r=-1 lpr=78 pi=[70,78)/1 crt=63'1508 lcod 0'0 active pruub 160.404754639s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:11 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 78 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78 pruub=13.197887421s) [0] r=-1 lpr=78 pi=[70,78)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404754639s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:11 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 78 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78 pruub=13.197775841s) [0] async=[0] r=-1 lpr=78 pi=[70,78)/1 crt=63'1508 lcod 0'0 active pruub 160.404800415s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:11 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 78 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78 pruub=13.197718620s) [0] r=-1 lpr=78 pi=[70,78)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404800415s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:11 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.5( v 76'1510 (0'0,76'1510] local-lis/les=77/78 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=76'1510 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.11( v 63'1508 (0'0,63'1508] local-lis/les=77/78 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.1( v 63'1508 (0'0,63'1508] local-lis/les=77/78 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=77/78 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:11 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 78 pg[9.d( v 63'1508 (0'0,63'1508] local-lis/les=77/78 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=77) [0] r=0 lpr=77 pi=[70,77)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:11 compute-0 podman[99274]: 2026-01-31 05:57:11.381408512 +0000 UTC m=+2.094246427 container remove 742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_sanderson, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 05:57:11 compute-0 sudo[99195]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:11 compute-0 systemd[1]: libpod-conmon-742ae2be28ccab68aecfcc05e82ae6008bad0d257b1909359f8e3b542fdd1ce3.scope: Deactivated successfully.
Jan 31 05:57:11 compute-0 sudo[99357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:57:11 compute-0 sudo[99357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:11 compute-0 sudo[99357]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:11 compute-0 sudo[99382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:57:11 compute-0 sudo[99382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:11 compute-0 podman[99419]: 2026-01-31 05:57:11.839329665 +0000 UTC m=+0.096621753 container create f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swirles, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:57:11 compute-0 podman[99419]: 2026-01-31 05:57:11.759985058 +0000 UTC m=+0.017277156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:57:11 compute-0 systemd[1]: Started libpod-conmon-f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9.scope.
Jan 31 05:57:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:57:11 compute-0 podman[99419]: 2026-01-31 05:57:11.913539828 +0000 UTC m=+0.170831886 container init f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 05:57:11 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 31 05:57:11 compute-0 podman[99419]: 2026-01-31 05:57:11.922457449 +0000 UTC m=+0.179749537 container start f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:57:11 compute-0 vigorous_swirles[99437]: 167 167
Jan 31 05:57:11 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 31 05:57:11 compute-0 systemd[1]: libpod-f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9.scope: Deactivated successfully.
Jan 31 05:57:11 compute-0 podman[99419]: 2026-01-31 05:57:11.927236953 +0000 UTC m=+0.184529001 container attach f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swirles, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 05:57:11 compute-0 podman[99419]: 2026-01-31 05:57:11.927513261 +0000 UTC m=+0.184805309 container died f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:57:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b141c90796356f1e667567d36dcaf04d23bf19423a2afa846103b2412a1485b-merged.mount: Deactivated successfully.
Jan 31 05:57:11 compute-0 podman[99419]: 2026-01-31 05:57:11.971157196 +0000 UTC m=+0.228449244 container remove f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:57:11 compute-0 systemd[1]: libpod-conmon-f12231f03bdffffd1a862187cfde9adbe36c5663d4c12ff164a2744ac45f26f9.scope: Deactivated successfully.
Jan 31 05:57:12 compute-0 podman[99460]: 2026-01-31 05:57:12.116546667 +0000 UTC m=+0.042245787 container create 9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 05:57:12 compute-0 systemd[1]: Started libpod-conmon-9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a.scope.
Jan 31 05:57:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 31 05:57:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:57:12 compute-0 ceph-mon[75251]: 3.1e scrub starts
Jan 31 05:57:12 compute-0 ceph-mon[75251]: 3.1e scrub ok
Jan 31 05:57:12 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:12 compute-0 ceph-mon[75251]: pgmap v171: 305 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 56 peering, 233 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 117/283 objects misplaced (41.343%); 0 B/s, 0 objects/s recovering
Jan 31 05:57:12 compute-0 ceph-mon[75251]: osdmap e78: 3 total, 3 up, 3 in
Jan 31 05:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b35df577d8cf91e11d12b34a88277206f690ac487ddafa828e270596e4e6f96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b35df577d8cf91e11d12b34a88277206f690ac487ddafa828e270596e4e6f96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b35df577d8cf91e11d12b34a88277206f690ac487ddafa828e270596e4e6f96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b35df577d8cf91e11d12b34a88277206f690ac487ddafa828e270596e4e6f96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:12 compute-0 podman[99460]: 2026-01-31 05:57:12.09919183 +0000 UTC m=+0.024890960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:57:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.203368187s) [0] async=[0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 active pruub 160.405120850s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.203268051s) [0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.405120850s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:12 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.202148438s) [0] async=[0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 active pruub 160.404449463s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.202253342s) [0] async=[0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 active pruub 160.404861450s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.202058792s) [0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404449463s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.202213287s) [0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404861450s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.202242851s) [0] async=[0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 active pruub 160.405395508s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.202178955s) [0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.405395508s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.201386452s) [0] async=[0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 active pruub 160.404876709s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.201304436s) [0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404876709s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.201569557s) [0] async=[0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 active pruub 160.405319214s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.201525688s) [0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.405319214s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.200329781s) [0] async=[0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 active pruub 160.404388428s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.200276375s) [0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404388428s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.200675011s) [0] async=[0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 active pruub 160.404998779s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 79 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=75/76 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79 pruub=12.200634003s) [0] r=-1 lpr=79 pi=[70,79)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 160.404998779s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=78/79 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.3( v 63'1508 (0'0,63'1508] local-lis/les=78/79 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:12 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 79 pg[9.1b( v 63'1508 (0'0,63'1508] local-lis/les=78/79 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=78) [0] r=0 lpr=78 pi=[70,78)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:12 compute-0 podman[99460]: 2026-01-31 05:57:12.216000088 +0000 UTC m=+0.141699188 container init 9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_murdock, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:57:12 compute-0 podman[99460]: 2026-01-31 05:57:12.221805192 +0000 UTC m=+0.147504282 container start 9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_murdock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:57:12 compute-0 podman[99460]: 2026-01-31 05:57:12.226821032 +0000 UTC m=+0.152520132 container attach 9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_murdock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:57:12 compute-0 blissful_murdock[99476]: {
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:     "0": [
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:         {
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "devices": [
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "/dev/loop3"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             ],
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_name": "ceph_lv0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_size": "21470642176",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "name": "ceph_lv0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "tags": {
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cluster_name": "ceph",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.crush_device_class": "",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.encrypted": "0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.objectstore": "bluestore",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osd_id": "0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.type": "block",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.vdo": "0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.with_tpm": "0"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             },
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "type": "block",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "vg_name": "ceph_vg0"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:         }
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:     ],
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:     "1": [
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:         {
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "devices": [
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "/dev/loop4"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             ],
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_name": "ceph_lv1",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_size": "21470642176",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "name": "ceph_lv1",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "tags": {
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cluster_name": "ceph",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.crush_device_class": "",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.encrypted": "0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.objectstore": "bluestore",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osd_id": "1",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.type": "block",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.vdo": "0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.with_tpm": "0"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             },
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "type": "block",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "vg_name": "ceph_vg1"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:         }
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:     ],
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:     "2": [
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:         {
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "devices": [
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "/dev/loop5"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             ],
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_name": "ceph_lv2",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_size": "21470642176",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "name": "ceph_lv2",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "tags": {
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.cluster_name": "ceph",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.crush_device_class": "",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.encrypted": "0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.objectstore": "bluestore",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osd_id": "2",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.type": "block",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.vdo": "0",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:                 "ceph.with_tpm": "0"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             },
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "type": "block",
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:             "vg_name": "ceph_vg2"
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:         }
Jan 31 05:57:12 compute-0 blissful_murdock[99476]:     ]
Jan 31 05:57:12 compute-0 blissful_murdock[99476]: }
Jan 31 05:57:12 compute-0 systemd[1]: libpod-9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a.scope: Deactivated successfully.
Jan 31 05:57:12 compute-0 podman[99460]: 2026-01-31 05:57:12.516883664 +0000 UTC m=+0.442582744 container died 9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 05:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b35df577d8cf91e11d12b34a88277206f690ac487ddafa828e270596e4e6f96-merged.mount: Deactivated successfully.
Jan 31 05:57:12 compute-0 podman[99460]: 2026-01-31 05:57:12.555652163 +0000 UTC m=+0.481351233 container remove 9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_murdock, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 05:57:12 compute-0 systemd[1]: libpod-conmon-9bb4ee7f1aa6739266b19c577c38b7c9119b2c4129fd42b568afab5687769d7a.scope: Deactivated successfully.
Jan 31 05:57:12 compute-0 sudo[99382]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:12 compute-0 sudo[99496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:57:12 compute-0 sudo[99496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:12 compute-0 sudo[99496]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:12 compute-0 sudo[99521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:57:12 compute-0 sudo[99521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:12 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 05:57:12 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 05:57:12 compute-0 podman[99559]: 2026-01-31 05:57:12.957230875 +0000 UTC m=+0.039952953 container create a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:57:12 compute-0 systemd[1]: Started libpod-conmon-a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913.scope.
Jan 31 05:57:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:57:13 compute-0 podman[99559]: 2026-01-31 05:57:13.019652127 +0000 UTC m=+0.102374225 container init a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:57:13 compute-0 podman[99559]: 2026-01-31 05:57:13.024538954 +0000 UTC m=+0.107261032 container start a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 05:57:13 compute-0 unruffled_euler[99575]: 167 167
Jan 31 05:57:13 compute-0 podman[99559]: 2026-01-31 05:57:13.028716561 +0000 UTC m=+0.111438669 container attach a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:57:13 compute-0 systemd[1]: libpod-a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913.scope: Deactivated successfully.
Jan 31 05:57:13 compute-0 podman[99559]: 2026-01-31 05:57:13.029946846 +0000 UTC m=+0.112668924 container died a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:57:13 compute-0 podman[99559]: 2026-01-31 05:57:12.938757066 +0000 UTC m=+0.021479174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-da10593649bafd7ec01a55617c3abe5622a19b5fb05df8f9df551c1f15eebf83-merged.mount: Deactivated successfully.
Jan 31 05:57:13 compute-0 podman[99559]: 2026-01-31 05:57:13.073866019 +0000 UTC m=+0.156588107 container remove a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:57:13 compute-0 systemd[1]: libpod-conmon-a72cd9f658e8ce7ff5a32bd54a92eced11d2d6696c8aacffe55acb85e4a45913.scope: Deactivated successfully.
Jan 31 05:57:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 8 active+recovery_wait+remapped, 3 peering, 294 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 6.2 KiB/s wr, 369 op/s; 58/283 objects misplaced (20.495%); 547 B/s, 10 objects/s recovering
Jan 31 05:57:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 31 05:57:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 05:57:13 compute-0 podman[99600]: 2026-01-31 05:57:13.204928038 +0000 UTC m=+0.041239609 container create d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:57:13 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 31 05:57:13 compute-0 ceph-mon[75251]: 3.1d scrub starts
Jan 31 05:57:13 compute-0 ceph-mon[75251]: 3.1d scrub ok
Jan 31 05:57:13 compute-0 ceph-mon[75251]: osdmap e79: 3 total, 3 up, 3 in
Jan 31 05:57:13 compute-0 ceph-mon[75251]: 7.e scrub starts
Jan 31 05:57:13 compute-0 ceph-mon[75251]: 7.e scrub ok
Jan 31 05:57:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 80 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 80 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 80 pg[9.1d( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 80 pg[9.9( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 80 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 80 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 80 pg[9.b( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:13 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 80 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=75/70 les/c/f=76/71/0 sis=79) [0] r=0 lpr=79 pi=[70,79)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:13 compute-0 systemd[1]: Started libpod-conmon-d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8.scope.
Jan 31 05:57:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb128ab2a78dba9135fce99d1a8c18a7a4ae0878bf0e381b6331a0115b4237c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb128ab2a78dba9135fce99d1a8c18a7a4ae0878bf0e381b6331a0115b4237c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb128ab2a78dba9135fce99d1a8c18a7a4ae0878bf0e381b6331a0115b4237c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb128ab2a78dba9135fce99d1a8c18a7a4ae0878bf0e381b6331a0115b4237c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:57:13 compute-0 podman[99600]: 2026-01-31 05:57:13.186975934 +0000 UTC m=+0.023287525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:57:13 compute-0 podman[99600]: 2026-01-31 05:57:13.290089888 +0000 UTC m=+0.126401469 container init d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:57:13 compute-0 podman[99600]: 2026-01-31 05:57:13.296041465 +0000 UTC m=+0.132353026 container start d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_williamson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:57:13 compute-0 podman[99600]: 2026-01-31 05:57:13.301505909 +0000 UTC m=+0.137817500 container attach d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:57:13 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 31 05:57:13 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 31 05:57:13 compute-0 lvm[99695]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:57:13 compute-0 lvm[99696]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:57:13 compute-0 lvm[99695]: VG ceph_vg0 finished
Jan 31 05:57:13 compute-0 lvm[99696]: VG ceph_vg1 finished
Jan 31 05:57:13 compute-0 lvm[99698]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:57:13 compute-0 lvm[99698]: VG ceph_vg2 finished
Jan 31 05:57:14 compute-0 compassionate_williamson[99617]: {}
Jan 31 05:57:14 compute-0 systemd[1]: libpod-d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8.scope: Deactivated successfully.
Jan 31 05:57:14 compute-0 podman[99701]: 2026-01-31 05:57:14.056767999 +0000 UTC m=+0.017862493 container died d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 05:57:14 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 31 05:57:14 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 31 05:57:14 compute-0 ceph-mon[75251]: pgmap v174: 305 pgs: 8 active+recovery_wait+remapped, 3 peering, 294 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 6.2 KiB/s wr, 369 op/s; 58/283 objects misplaced (20.495%); 547 B/s, 10 objects/s recovering
Jan 31 05:57:14 compute-0 ceph-mon[75251]: osdmap e80: 3 total, 3 up, 3 in
Jan 31 05:57:14 compute-0 ceph-mon[75251]: 3.8 scrub starts
Jan 31 05:57:14 compute-0 ceph-mon[75251]: 3.8 scrub ok
Jan 31 05:57:14 compute-0 ceph-mon[75251]: 7.19 scrub starts
Jan 31 05:57:14 compute-0 ceph-mon[75251]: 7.19 scrub ok
Jan 31 05:57:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fb128ab2a78dba9135fce99d1a8c18a7a4ae0878bf0e381b6331a0115b4237c-merged.mount: Deactivated successfully.
Jan 31 05:57:14 compute-0 podman[99701]: 2026-01-31 05:57:14.425339184 +0000 UTC m=+0.386433658 container remove d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:57:14 compute-0 systemd[1]: libpod-conmon-d1577efdf791dd18f293c203c3b925cbbea8bc8090ec85e2d254aea710c000e8.scope: Deactivated successfully.
Jan 31 05:57:14 compute-0 sudo[99521]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:57:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:57:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:14 compute-0 sudo[99716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:57:14 compute-0 sudo[99716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:57:14 compute-0 sudo[99716]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 8 active+recovery_wait+remapped, 3 peering, 294 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 58/283 objects misplaced (20.495%); 525 B/s, 9 objects/s recovering
Jan 31 05:57:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:57:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:57:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:57:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:57:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:57:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:57:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:57:15 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 31 05:57:15 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 31 05:57:16 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 31 05:57:16 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 31 05:57:16 compute-0 ceph-mon[75251]: pgmap v176: 305 pgs: 8 active+recovery_wait+remapped, 3 peering, 294 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 58/283 objects misplaced (20.495%); 525 B/s, 9 objects/s recovering
Jan 31 05:57:17 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 31 05:57:17 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 31 05:57:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 972 B/s, 21 objects/s recovering
Jan 31 05:57:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 05:57:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 05:57:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 31 05:57:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 05:57:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 05:57:17 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 31 05:57:17 compute-0 ceph-mon[75251]: 2.16 scrub starts
Jan 31 05:57:17 compute-0 ceph-mon[75251]: 2.16 scrub ok
Jan 31 05:57:17 compute-0 ceph-mon[75251]: 2.1b scrub starts
Jan 31 05:57:17 compute-0 ceph-mon[75251]: 2.1b scrub ok
Jan 31 05:57:17 compute-0 ceph-mon[75251]: 2.17 scrub starts
Jan 31 05:57:17 compute-0 ceph-mon[75251]: 2.17 scrub ok
Jan 31 05:57:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 05:57:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:18 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 31 05:57:18 compute-0 ceph-mon[75251]: pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 972 B/s, 21 objects/s recovering
Jan 31 05:57:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 05:57:18 compute-0 ceph-mon[75251]: osdmap e81: 3 total, 3 up, 3 in
Jan 31 05:57:18 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 31 05:57:18 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 31 05:57:18 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 31 05:57:19 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 31 05:57:19 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 31 05:57:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 424 B/s, 11 objects/s recovering
Jan 31 05:57:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 05:57:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 05:57:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 31 05:57:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 05:57:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 05:57:19 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 31 05:57:19 compute-0 ceph-mon[75251]: 2.b scrub starts
Jan 31 05:57:19 compute-0 ceph-mon[75251]: 2.b scrub ok
Jan 31 05:57:19 compute-0 ceph-mon[75251]: 7.c scrub starts
Jan 31 05:57:19 compute-0 ceph-mon[75251]: 4.10 scrub starts
Jan 31 05:57:19 compute-0 ceph-mon[75251]: 4.10 scrub ok
Jan 31 05:57:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 05:57:20 compute-0 ceph-mon[75251]: 7.c scrub ok
Jan 31 05:57:20 compute-0 ceph-mon[75251]: pgmap v179: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 424 B/s, 11 objects/s recovering
Jan 31 05:57:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 05:57:20 compute-0 ceph-mon[75251]: osdmap e82: 3 total, 3 up, 3 in
Jan 31 05:57:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 371 B/s, 9 objects/s recovering
Jan 31 05:57:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 05:57:21 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 05:57:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 31 05:57:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 05:57:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 05:57:22 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 31 05:57:22 compute-0 ceph-mon[75251]: pgmap v181: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 371 B/s, 9 objects/s recovering
Jan 31 05:57:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 05:57:22 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 31 05:57:22 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 31 05:57:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:23 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 31 05:57:23 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 31 05:57:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 05:57:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 05:57:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 31 05:57:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 05:57:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 05:57:23 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 31 05:57:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 05:57:23 compute-0 ceph-mon[75251]: osdmap e83: 3 total, 3 up, 3 in
Jan 31 05:57:23 compute-0 ceph-mon[75251]: 5.14 scrub starts
Jan 31 05:57:23 compute-0 ceph-mon[75251]: 5.14 scrub ok
Jan 31 05:57:23 compute-0 ceph-mon[75251]: 5.13 scrub starts
Jan 31 05:57:23 compute-0 ceph-mon[75251]: 5.13 scrub ok
Jan 31 05:57:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 05:57:23 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 31 05:57:23 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 31 05:57:24 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 31 05:57:24 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 31 05:57:24 compute-0 ceph-mon[75251]: pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 05:57:24 compute-0 ceph-mon[75251]: osdmap e84: 3 total, 3 up, 3 in
Jan 31 05:57:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:25 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 05:57:25 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 05:57:25 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 31 05:57:25 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 05:57:25 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 05:57:25 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 31 05:57:25 compute-0 ceph-mon[75251]: 3.7 scrub starts
Jan 31 05:57:25 compute-0 ceph-mon[75251]: 3.7 scrub ok
Jan 31 05:57:25 compute-0 ceph-mon[75251]: 5.3 scrub starts
Jan 31 05:57:25 compute-0 ceph-mon[75251]: 5.3 scrub ok
Jan 31 05:57:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 05:57:25 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 31 05:57:25 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 31 05:57:26 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 05:57:26 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 05:57:26 compute-0 ceph-mon[75251]: pgmap v185: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 05:57:26 compute-0 ceph-mon[75251]: osdmap e85: 3 total, 3 up, 3 in
Jan 31 05:57:26 compute-0 ceph-mon[75251]: 2.1f scrub starts
Jan 31 05:57:26 compute-0 ceph-mon[75251]: 2.1f scrub ok
Jan 31 05:57:27 compute-0 sshd-session[99741]: Accepted publickey for zuul from 192.168.122.30 port 46326 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:57:27 compute-0 systemd-logind[797]: New session 34 of user zuul.
Jan 31 05:57:27 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 31 05:57:27 compute-0 sshd-session[99741]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:57:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 31 05:57:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 31 05:57:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 05:57:27 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 05:57:27 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 31 05:57:27 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 31 05:57:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 31 05:57:27 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 05:57:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 05:57:27 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 31 05:57:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 86 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=78/79 n=7 ec=70/41 lis/c=78/78 les/c/f=79/79/0 sis=86 pruub=8.522109985s) [2] r=-1 lpr=86 pi=[78,86)/1 crt=63'1508 active pruub 176.750823975s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 86 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=78/79 n=7 ec=70/41 lis/c=78/78 les/c/f=79/79/0 sis=86 pruub=8.522066116s) [2] r=-1 lpr=86 pi=[78,86)/1 crt=63'1508 unknown NOTIFY pruub 176.750823975s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 86 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=86 pruub=9.522771835s) [2] r=-1 lpr=86 pi=[79,86)/1 crt=63'1508 active pruub 177.751815796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 86 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=77/78 n=7 ec=70/41 lis/c=77/77 les/c/f=78/78/0 sis=86 pruub=15.666674614s) [2] r=-1 lpr=86 pi=[77,86)/1 crt=63'1508 active pruub 183.895996094s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 86 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=86 pruub=9.522722244s) [2] r=-1 lpr=86 pi=[79,86)/1 crt=63'1508 unknown NOTIFY pruub 177.751815796s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 86 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=86 pruub=9.522847176s) [2] r=-1 lpr=86 pi=[79,86)/1 crt=63'1508 active pruub 177.752197266s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 86 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=86 pruub=9.522826195s) [2] r=-1 lpr=86 pi=[79,86)/1 crt=63'1508 unknown NOTIFY pruub 177.752197266s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 86 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=77/78 n=7 ec=70/41 lis/c=77/77 les/c/f=78/78/0 sis=86 pruub=15.666390419s) [2] r=-1 lpr=86 pi=[77,86)/1 crt=63'1508 unknown NOTIFY pruub 183.895996094s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:27 compute-0 python3.9[99894]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 05:57:27 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 86 pg[9.17( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=77/77 les/c/f=78/78/0 sis=86) [2] r=0 lpr=86 pi=[77,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:27 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 86 pg[9.f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=86) [2] r=0 lpr=86 pi=[79,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:27 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 86 pg[9.7( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=86) [2] r=0 lpr=86 pi=[79,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:27 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 86 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=78/78 les/c/f=79/79/0 sis=86) [2] r=0 lpr=86 pi=[78,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:27 compute-0 ceph-mon[75251]: 7.2 scrub starts
Jan 31 05:57:27 compute-0 ceph-mon[75251]: 7.2 scrub ok
Jan 31 05:57:27 compute-0 ceph-mon[75251]: 4.12 scrub starts
Jan 31 05:57:27 compute-0 ceph-mon[75251]: 4.12 scrub ok
Jan 31 05:57:27 compute-0 ceph-mon[75251]: pgmap v187: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:27 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 05:57:27 compute-0 ceph-mon[75251]: 2.f scrub starts
Jan 31 05:57:27 compute-0 ceph-mon[75251]: 2.f scrub ok
Jan 31 05:57:27 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 31 05:57:27 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 31 05:57:28 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 85 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85 pruub=15.963150024s) [2] r=-1 lpr=85 pi=[70,85)/1 crt=63'1508 lcod 0'0 active pruub 180.643157959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 86 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85 pruub=15.963096619s) [2] r=-1 lpr=85 pi=[70,85)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 180.643157959s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:28 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 85 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85 pruub=15.963181496s) [2] r=-1 lpr=85 pi=[70,85)/1 crt=63'1508 lcod 0'0 active pruub 180.643753052s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 85 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85 pruub=15.963410378s) [2] r=-1 lpr=85 pi=[70,85)/1 crt=63'1508 lcod 0'0 active pruub 180.644088745s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 86 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85 pruub=15.963355064s) [2] r=-1 lpr=85 pi=[70,85)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 180.644088745s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:28 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 85 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85 pruub=15.968046188s) [2] r=-1 lpr=85 pi=[70,85)/1 crt=63'1508 lcod 0'0 active pruub 180.649078369s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 86 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85 pruub=15.962832451s) [2] r=-1 lpr=85 pi=[70,85)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 180.643753052s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:28 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 86 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85 pruub=15.967739105s) [2] r=-1 lpr=85 pi=[70,85)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 180.649078369s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 86 pg[9.16( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85) [2] r=0 lpr=86 pi=[70,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 86 pg[9.6( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85) [2] r=0 lpr=86 pi=[70,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 86 pg[9.e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85) [2] r=0 lpr=86 pi=[70,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 86 pg[9.1e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=85) [2] r=0 lpr=86 pi=[70,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 05:57:28 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=78/78 les/c/f=79/79/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[78,87)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 87 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=78/78 les/c/f=79/79/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[78,87)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 87 pg[9.7( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[79,87)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 87 pg[9.7( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[79,87)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[79,87)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 87 pg[9.f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[79,87)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 87 pg[9.17( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=77/77 les/c/f=78/78/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[77,87)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 87 pg[9.17( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=77/77 les/c/f=78/78/0 sis=87) [2]/[0] r=-1 lpr=87 pi=[77,87)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:28 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 87 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=78/79 n=7 ec=70/41 lis/c=78/78 les/c/f=79/79/0 sis=87) [2]/[0] r=0 lpr=87 pi=[78,87)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 87 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=77/78 n=7 ec=70/41 lis/c=77/77 les/c/f=78/78/0 sis=87) [2]/[0] r=0 lpr=87 pi=[77,87)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 87 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] r=0 lpr=87 pi=[79,87)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 87 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] r=0 lpr=87 pi=[79,87)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:28 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 87 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=78/79 n=7 ec=70/41 lis/c=78/78 les/c/f=79/79/0 sis=87) [2]/[0] r=0 lpr=87 pi=[78,87)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:28 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 87 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=77/78 n=7 ec=70/41 lis/c=77/77 les/c/f=78/78/0 sis=87) [2]/[0] r=0 lpr=87 pi=[77,87)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:28 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 87 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] r=0 lpr=87 pi=[79,87)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:28 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 87 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] r=0 lpr=87 pi=[79,87)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:28 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 05:57:28 compute-0 ceph-mon[75251]: osdmap e86: 3 total, 3 up, 3 in
Jan 31 05:57:28 compute-0 ceph-mon[75251]: osdmap e87: 3 total, 3 up, 3 in
Jan 31 05:57:28 compute-0 python3.9[100068]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:57:28 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 31 05:57:28 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 31 05:57:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 05:57:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 05:57:29 compute-0 sudo[100222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpiuowyzlyslttbqsijgshuicodaucwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839049.1974638-40-218493605878444/AnsiballZ_command.py'
Jan 31 05:57:29 compute-0 sudo[100222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:57:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 31 05:57:29 compute-0 python3.9[100224]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:57:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 05:57:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 05:57:29 compute-0 sudo[100222]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:29 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[70,88)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[70,88)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.1e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[70,88)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.1e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[70,88)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.6( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[70,88)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.6( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[70,88)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.16( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[70,88)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.16( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=-1 lpr=88 pi=[70,88)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:29 compute-0 ceph-mon[75251]: 7.5 scrub starts
Jan 31 05:57:29 compute-0 ceph-mon[75251]: 7.5 scrub ok
Jan 31 05:57:29 compute-0 ceph-mon[75251]: pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=14.720680237s) [2] r=-1 lpr=88 pi=[70,88)/1 crt=63'1508 lcod 0'0 active pruub 180.643737793s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=14.720543861s) [2] r=-1 lpr=88 pi=[70,88)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 180.643737793s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.8( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2] r=0 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=14.720731735s) [2] r=-1 lpr=88 pi=[70,88)/1 crt=63'1508 lcod 0'0 active pruub 180.644943237s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:29 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 88 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88 pruub=14.720682144s) [2] r=-1 lpr=88 pi=[70,88)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 180.644943237s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 88 pg[9.18( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2] r=0 lpr=88 pi=[70,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:29 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 88 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[79,87)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:29 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 88 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=7 ec=70/41 lis/c=77/77 les/c/f=78/78/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[77,87)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:29 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 88 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=8 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[79,87)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:29 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 88 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=7 ec=70/41 lis/c=78/78 les/c/f=79/79/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[78,87)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:30 compute-0 sudo[100375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bekkhqjsttnlfaoijujwmuwjzcsgjtaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839050.0700831-52-107286943212881/AnsiballZ_stat.py'
Jan 31 05:57:30 compute-0 sudo[100375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:57:30 compute-0 python3.9[100377]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:57:30 compute-0 sudo[100375]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:30 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 31 05:57:30 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 05:57:30 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 31 05:57:30 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 89 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] r=0 lpr=89 pi=[70,89)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:30 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 89 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] r=0 lpr=89 pi=[70,89)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:30 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 89 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] r=0 lpr=89 pi=[70,89)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:30 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 89 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] r=0 lpr=89 pi=[70,89)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.18( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[70,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.18( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[70,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.8( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[70,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.8( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[70,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=87/78 les/c/f=88/79/0 sis=89) [2] r=0 lpr=89 pi=[78,89)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=87/78 les/c/f=88/79/0 sis=89) [2] r=0 lpr=89 pi=[78,89)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89) [2] r=0 lpr=89 pi=[79,89)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89) [2] r=0 lpr=89 pi=[79,89)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89) [2] r=0 lpr=89 pi=[79,89)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89) [2] r=0 lpr=89 pi=[79,89)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:31 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 89 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=7 ec=70/41 lis/c=87/78 les/c/f=88/79/0 sis=89 pruub=14.923062325s) [2] async=[2] r=-1 lpr=89 pi=[78,89)/1 crt=63'1508 active pruub 186.462890625s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=87/77 les/c/f=88/78/0 sis=89) [2] r=0 lpr=89 pi=[77,89)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 89 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=87/77 les/c/f=88/78/0 sis=89) [2] r=0 lpr=89 pi=[77,89)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:31 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 89 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=7 ec=70/41 lis/c=87/78 les/c/f=88/79/0 sis=89 pruub=14.922987938s) [2] r=-1 lpr=89 pi=[78,89)/1 crt=63'1508 unknown NOTIFY pruub 186.462890625s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:31 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 89 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89 pruub=14.922396660s) [2] async=[2] r=-1 lpr=89 pi=[79,89)/1 crt=63'1508 active pruub 186.462554932s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 89 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89 pruub=14.922349930s) [2] r=-1 lpr=89 pi=[79,89)/1 crt=63'1508 unknown NOTIFY pruub 186.462554932s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:31 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 89 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=7 ec=70/41 lis/c=87/77 les/c/f=88/78/0 sis=89 pruub=14.922026634s) [2] async=[2] r=-1 lpr=89 pi=[77,89)/1 crt=63'1508 active pruub 186.462524414s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 89 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=7 ec=70/41 lis/c=87/77 les/c/f=88/78/0 sis=89 pruub=14.921954155s) [2] r=-1 lpr=89 pi=[77,89)/1 crt=63'1508 unknown NOTIFY pruub 186.462524414s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:31 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 89 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89 pruub=14.921657562s) [2] async=[2] r=-1 lpr=89 pi=[79,89)/1 crt=63'1508 active pruub 186.462432861s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:31 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 89 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=87/88 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89 pruub=14.921614647s) [2] r=-1 lpr=89 pi=[79,89)/1 crt=63'1508 unknown NOTIFY pruub 186.462432861s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:31 compute-0 ceph-mon[75251]: 7.8 scrub starts
Jan 31 05:57:31 compute-0 ceph-mon[75251]: 7.8 scrub ok
Jan 31 05:57:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 05:57:31 compute-0 ceph-mon[75251]: osdmap e88: 3 total, 3 up, 3 in
Jan 31 05:57:31 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 89 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:31 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 89 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:31 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 89 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:31 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 89 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=88) [2]/[1] async=[2] r=0 lpr=88 pi=[70,88)/2 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 05:57:31 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 05:57:31 compute-0 sudo[100529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaadepecmfgcwwxfjfjynzwqerrbzzdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839050.947605-63-163106148830250/AnsiballZ_file.py'
Jan 31 05:57:31 compute-0 sudo[100529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:57:31 compute-0 python3.9[100531]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:57:31 compute-0 sudo[100529]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 31 05:57:31 compute-0 sudo[100681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaqrzgbbcjnsmzjdgzvvkslmnntyaele ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839051.6943092-72-165453391922005/AnsiballZ_file.py'
Jan 31 05:57:31 compute-0 sudo[100681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:57:31 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 05:57:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 05:57:32 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.971125603s) [2] async=[2] r=-1 lpr=90 pi=[70,90)/2 crt=63'1508 lcod 0'0 active pruub 183.038208008s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.971010208s) [2] r=-1 lpr=90 pi=[70,90)/2 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 183.038208008s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.970961571s) [2] async=[2] r=-1 lpr=90 pi=[70,90)/2 crt=63'1508 lcod 0'0 active pruub 183.038208008s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.970888138s) [2] r=-1 lpr=90 pi=[70,90)/2 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 183.038208008s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.970738411s) [2] async=[2] r=-1 lpr=90 pi=[70,90)/2 crt=63'1508 lcod 0'0 active pruub 183.038208008s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.970690727s) [2] r=-1 lpr=90 pi=[70,90)/2 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 183.038208008s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.953083038s) [2] async=[2] r=-1 lpr=90 pi=[70,90)/2 crt=63'1508 lcod 0'0 active pruub 183.020736694s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=88/89 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.952971458s) [2] r=-1 lpr=90 pi=[70,90)/2 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 183.020736694s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.7( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89) [2] r=0 lpr=89 pi=[79,89)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.f( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=8 ec=70/41 lis/c=87/79 les/c/f=88/80/0 sis=89) [2] r=0 lpr=89 pi=[79,89)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=87/78 les/c/f=88/79/0 sis=89) [2] r=0 lpr=89 pi=[78,89)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 90 pg[9.17( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=87/77 les/c/f=88/78/0 sis=89) [2] r=0 lpr=89 pi=[77,89)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[70,89)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:32 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 90 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[70,89)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:32 compute-0 ceph-mon[75251]: osdmap e89: 3 total, 3 up, 3 in
Jan 31 05:57:32 compute-0 ceph-mon[75251]: pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:32 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 05:57:32 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 05:57:32 compute-0 ceph-mon[75251]: osdmap e90: 3 total, 3 up, 3 in
Jan 31 05:57:32 compute-0 python3.9[100683]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:57:32 compute-0 sudo[100681]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:32 compute-0 python3.9[100833]: ansible-ansible.builtin.service_facts Invoked
Jan 31 05:57:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 31 05:57:33 compute-0 network[100850]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 05:57:33 compute-0 network[100851]: 'network-scripts' will be removed from distribution in near future.
Jan 31 05:57:33 compute-0 network[100852]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 05:57:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 263 B/s, 6 objects/s recovering
Jan 31 05:57:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 05:57:33 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 31 05:57:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 31 05:57:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 31 05:57:33 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 91 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:33 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 91 pg[9.6( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:33 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 91 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:33 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 91 pg[9.e( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=8 ec=70/41 lis/c=88/70 les/c/f=89/71/0 sis=90) [2] r=0 lpr=90 pi=[70,90)/2 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 31 05:57:34 compute-0 ceph-mon[75251]: pgmap v195: 305 pgs: 4 peering, 301 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 263 B/s, 6 objects/s recovering
Jan 31 05:57:34 compute-0 ceph-mon[75251]: osdmap e91: 3 total, 3 up, 3 in
Jan 31 05:57:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 05:57:34 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 31 05:57:35 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 92 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=92 pruub=13.064129829s) [2] async=[2] r=-1 lpr=92 pi=[70,92)/1 crt=63'1508 lcod 0'0 active pruub 184.143203735s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:35 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 92 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=92 pruub=13.063914299s) [2] r=-1 lpr=92 pi=[70,92)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 184.143203735s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 4 peering, 299 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 7/283 objects misplaced (2.473%); 381 B/s, 8 objects/s recovering
Jan 31 05:57:35 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 92 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=92) [2] r=0 lpr=92 pi=[70,92)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:35 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 92 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=92) [2] r=0 lpr=92 pi=[70,92)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:35 compute-0 ceph-mon[75251]: 5.5 scrub starts
Jan 31 05:57:35 compute-0 ceph-mon[75251]: 5.5 scrub ok
Jan 31 05:57:35 compute-0 ceph-mon[75251]: osdmap e92: 3 total, 3 up, 3 in
Jan 31 05:57:35 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 31 05:57:35 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 31 05:57:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 31 05:57:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 05:57:36 compute-0 python3.9[101112]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:57:36 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 31 05:57:36 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 93 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=93) [2] r=0 lpr=93 pi=[70,93)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:36 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 93 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=93) [2] r=0 lpr=93 pi=[70,93)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:36 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 93 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=8 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=93 pruub=11.858668327s) [2] async=[2] r=-1 lpr=93 pi=[70,93)/1 crt=63'1508 lcod 0'0 active pruub 184.143249512s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:36 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 93 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=8 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=93 pruub=11.858475685s) [2] r=-1 lpr=93 pi=[70,93)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 184.143249512s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:36 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 93 pg[9.18( v 63'1508 (0'0,63'1508] local-lis/les=92/93 n=7 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=92) [2] r=0 lpr=92 pi=[70,92)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:36 compute-0 ceph-mon[75251]: pgmap v198: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 4 peering, 299 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 7/283 objects misplaced (2.473%); 381 B/s, 8 objects/s recovering
Jan 31 05:57:36 compute-0 ceph-mon[75251]: 5.4 scrub starts
Jan 31 05:57:36 compute-0 ceph-mon[75251]: 5.4 scrub ok
Jan 31 05:57:36 compute-0 ceph-mon[75251]: osdmap e93: 3 total, 3 up, 3 in
Jan 31 05:57:36 compute-0 python3.9[101262]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:57:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 31 05:57:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 05:57:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 265 B/s, 8 objects/s recovering
Jan 31 05:57:37 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 31 05:57:37 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 94 pg[9.8( v 63'1508 (0'0,63'1508] local-lis/les=93/94 n=8 ec=70/41 lis/c=89/70 les/c/f=90/71/0 sis=93) [2] r=0 lpr=93 pi=[70,93)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:37 compute-0 python3.9[101416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:57:37 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 31 05:57:37 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 31 05:57:38 compute-0 ceph-mon[75251]: pgmap v200: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 265 B/s, 8 objects/s recovering
Jan 31 05:57:38 compute-0 ceph-mon[75251]: osdmap e94: 3 total, 3 up, 3 in
Jan 31 05:57:38 compute-0 sudo[101572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcewfrazidnpwxqiijzvsupxoegvltkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839058.4403505-120-105249502214138/AnsiballZ_setup.py'
Jan 31 05:57:38 compute-0 sudo[101572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:57:38 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 31 05:57:38 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 31 05:57:38 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 31 05:57:38 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 31 05:57:39 compute-0 python3.9[101574]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:57:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 229 B/s, 7 objects/s recovering
Jan 31 05:57:39 compute-0 sudo[101572]: pam_unix(sudo:session): session closed for user root
Jan 31 05:57:39 compute-0 sudo[101656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jidqclxjheujptbdydutcukjbwmcfnqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839058.4403505-120-105249502214138/AnsiballZ_dnf.py'
Jan 31 05:57:39 compute-0 sudo[101656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:57:39 compute-0 python3.9[101658]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:57:40 compute-0 ceph-mon[75251]: 7.a scrub starts
Jan 31 05:57:40 compute-0 ceph-mon[75251]: 7.a scrub ok
Jan 31 05:57:40 compute-0 ceph-mon[75251]: 2.15 scrub starts
Jan 31 05:57:40 compute-0 ceph-mon[75251]: 2.15 scrub ok
Jan 31 05:57:40 compute-0 ceph-mon[75251]: pgmap v202: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 229 B/s, 7 objects/s recovering
Jan 31 05:57:40 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 31 05:57:40 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 31 05:57:40 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 31 05:57:40 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 31 05:57:41 compute-0 ceph-mon[75251]: 3.11 scrub starts
Jan 31 05:57:41 compute-0 ceph-mon[75251]: 3.11 scrub ok
Jan 31 05:57:41 compute-0 ceph-mon[75251]: 5.2 scrub starts
Jan 31 05:57:41 compute-0 ceph-mon[75251]: 5.2 scrub ok
Jan 31 05:57:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 152 B/s, 5 objects/s recovering
Jan 31 05:57:41 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 31 05:57:41 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 31 05:57:42 compute-0 ceph-mon[75251]: 5.11 scrub starts
Jan 31 05:57:42 compute-0 ceph-mon[75251]: 5.11 scrub ok
Jan 31 05:57:42 compute-0 ceph-mon[75251]: pgmap v203: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 152 B/s, 5 objects/s recovering
Jan 31 05:57:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:42 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 31 05:57:42 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 31 05:57:42 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 31 05:57:42 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 31 05:57:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 117 B/s, 4 objects/s recovering
Jan 31 05:57:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 05:57:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 05:57:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 31 05:57:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 05:57:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 05:57:43 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 31 05:57:43 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 31 05:57:43 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 31 05:57:44 compute-0 ceph-mon[75251]: 3.e scrub starts
Jan 31 05:57:44 compute-0 ceph-mon[75251]: 3.e scrub ok
Jan 31 05:57:44 compute-0 ceph-mon[75251]: 4.8 scrub starts
Jan 31 05:57:44 compute-0 ceph-mon[75251]: 4.8 scrub ok
Jan 31 05:57:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 05:57:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_05:57:44
Jan 31 05:57:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:57:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 05:57:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'volumes', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', '.rgw.root']
Jan 31 05:57:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 05:57:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 05:57:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:57:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:57:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 05:57:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 05:57:45 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 31 05:57:45 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 31 05:57:45 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 31 05:57:45 compute-0 ceph-mon[75251]: 3.5 scrub starts
Jan 31 05:57:45 compute-0 ceph-mon[75251]: 3.5 scrub ok
Jan 31 05:57:45 compute-0 ceph-mon[75251]: pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 117 B/s, 4 objects/s recovering
Jan 31 05:57:45 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 05:57:45 compute-0 ceph-mon[75251]: 7.1 scrub starts
Jan 31 05:57:45 compute-0 ceph-mon[75251]: 7.1 scrub ok
Jan 31 05:57:45 compute-0 ceph-mon[75251]: osdmap e95: 3 total, 3 up, 3 in
Jan 31 05:57:45 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 05:57:46 compute-0 ceph-mon[75251]: pgmap v206: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 05:57:46 compute-0 ceph-mon[75251]: osdmap e96: 3 total, 3 up, 3 in
Jan 31 05:57:46 compute-0 ceph-mon[75251]: 5.7 scrub starts
Jan 31 05:57:46 compute-0 ceph-mon[75251]: 5.7 scrub ok
Jan 31 05:57:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 05:57:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 05:57:47 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 05:57:47 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 05:57:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 31 05:57:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 05:57:47 compute-0 ceph-mon[75251]: 2.19 scrub starts
Jan 31 05:57:47 compute-0 ceph-mon[75251]: 2.19 scrub ok
Jan 31 05:57:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 05:57:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 05:57:47 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 31 05:57:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:48 compute-0 ceph-mon[75251]: pgmap v208: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:48 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 05:57:48 compute-0 ceph-mon[75251]: osdmap e97: 3 total, 3 up, 3 in
Jan 31 05:57:49 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 31 05:57:49 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 31 05:57:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 05:57:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 05:57:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 97 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=97 pruub=11.342222214s) [2] r=-1 lpr=97 pi=[70,97)/1 crt=63'1508 lcod 0'0 active pruub 196.643569946s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 97 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=97 pruub=11.342151642s) [2] r=-1 lpr=97 pi=[70,97)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 196.643569946s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 97 pg[9.c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=97) [2] r=0 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 97 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=97 pruub=11.342941284s) [2] r=-1 lpr=97 pi=[70,97)/1 crt=63'1508 lcod 0'0 active pruub 196.645278931s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 97 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=97 pruub=11.342886925s) [2] r=-1 lpr=97 pi=[70,97)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 196.645278931s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 97 pg[9.1c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=97) [2] r=0 lpr=97 pi=[70,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 31 05:57:49 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 05:57:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 05:57:49 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 31 05:57:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 98 pg[9.1c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] r=-1 lpr=98 pi=[70,98)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 98 pg[9.1c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] r=-1 lpr=98 pi=[70,98)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 98 pg[9.c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] r=-1 lpr=98 pi=[70,98)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:49 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 98 pg[9.c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] r=-1 lpr=98 pi=[70,98)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:49 compute-0 ceph-mon[75251]: 5.16 scrub starts
Jan 31 05:57:49 compute-0 ceph-mon[75251]: 5.16 scrub ok
Jan 31 05:57:49 compute-0 ceph-mon[75251]: pgmap v210: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:49 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 05:57:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 98 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] r=0 lpr=98 pi=[70,98)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 98 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] r=0 lpr=98 pi=[70,98)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 98 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] r=0 lpr=98 pi=[70,98)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:49 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 98 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=70/71 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] r=0 lpr=98 pi=[70,98)/1 crt=63'1508 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:50 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 31 05:57:50 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 31 05:57:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 31 05:57:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 05:57:50 compute-0 ceph-mon[75251]: osdmap e98: 3 total, 3 up, 3 in
Jan 31 05:57:50 compute-0 ceph-mon[75251]: 5.9 scrub starts
Jan 31 05:57:50 compute-0 ceph-mon[75251]: 5.9 scrub ok
Jan 31 05:57:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 05:57:50 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 31 05:57:50 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 99 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=98/99 n=7 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] async=[2] r=0 lpr=98 pi=[70,98)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:50 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 99 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=98/99 n=8 ec=70/41 lis/c=70/70 les/c/f=71/71/0 sis=98) [2]/[1] async=[2] r=0 lpr=98 pi=[70,98)/1 crt=63'1508 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:50 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 31 05:57:51 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 31 05:57:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 05:57:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 05:57:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 31 05:57:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 05:57:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 05:57:51 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 31 05:57:51 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 100 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=98/99 n=8 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100 pruub=15.009004593s) [2] async=[2] r=-1 lpr=100 pi=[70,100)/1 crt=63'1508 lcod 0'0 active pruub 202.718826294s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:51 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 100 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=98/99 n=7 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100 pruub=15.004827499s) [2] async=[2] r=-1 lpr=100 pi=[70,100)/1 crt=63'1508 lcod 0'0 active pruub 202.715240479s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:51 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 100 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=98/99 n=8 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100 pruub=15.008534431s) [2] r=-1 lpr=100 pi=[70,100)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 202.718826294s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:51 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 100 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=98/99 n=7 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100 pruub=15.004692078s) [2] r=-1 lpr=100 pi=[70,100)/1 crt=63'1508 lcod 0'0 unknown NOTIFY pruub 202.715240479s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:57:51 compute-0 ceph-mon[75251]: osdmap e99: 3 total, 3 up, 3 in
Jan 31 05:57:51 compute-0 ceph-mon[75251]: 5.12 scrub starts
Jan 31 05:57:51 compute-0 ceph-mon[75251]: 5.12 scrub ok
Jan 31 05:57:51 compute-0 ceph-mon[75251]: pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 05:57:51 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 100 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100) [2] r=0 lpr=100 pi=[70,100)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:51 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 100 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100) [2] r=0 lpr=100 pi=[70,100)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:51 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 100 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100) [2] r=0 lpr=100 pi=[70,100)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:57:51 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 100 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=8 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100) [2] r=0 lpr=100 pi=[70,100)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:57:52 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 31 05:57:52 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 31 05:57:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 31 05:57:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 05:57:52 compute-0 ceph-mon[75251]: osdmap e100: 3 total, 3 up, 3 in
Jan 31 05:57:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 05:57:52 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 31 05:57:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 101 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=100/101 n=7 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100) [2] r=0 lpr=100 pi=[70,100)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:52 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 101 pg[9.c( v 63'1508 (0'0,63'1508] local-lis/les=100/101 n=8 ec=70/41 lis/c=98/70 les/c/f=99/71/0 sis=100) [2] r=0 lpr=100 pi=[70,100)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:57:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 05:57:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 05:57:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 31 05:57:53 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 05:57:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 05:57:53 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 31 05:57:53 compute-0 ceph-mon[75251]: 5.1e scrub starts
Jan 31 05:57:53 compute-0 ceph-mon[75251]: 5.1e scrub ok
Jan 31 05:57:53 compute-0 ceph-mon[75251]: osdmap e101: 3 total, 3 up, 3 in
Jan 31 05:57:53 compute-0 ceph-mon[75251]: pgmap v216: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:53 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 05:57:53 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 31 05:57:53 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 31 05:57:54 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 31 05:57:54 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 31 05:57:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 05:57:54 compute-0 ceph-mon[75251]: osdmap e102: 3 total, 3 up, 3 in
Jan 31 05:57:54 compute-0 ceph-mon[75251]: 2.18 scrub starts
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 31 05:57:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 05:57:55 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 31 05:57:55 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7891872858344449e-06 of space, bias 4.0, pg target 0.002147024743001334 quantized to 16 (current 16)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:57:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:57:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 31 05:57:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 05:57:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 05:57:55 compute-0 ceph-mon[75251]: 7.15 scrub starts
Jan 31 05:57:55 compute-0 ceph-mon[75251]: 7.15 scrub ok
Jan 31 05:57:55 compute-0 ceph-mon[75251]: 2.18 scrub ok
Jan 31 05:57:55 compute-0 ceph-mon[75251]: pgmap v218: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:57:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 05:57:55 compute-0 ceph-mon[75251]: 2.2 scrub starts
Jan 31 05:57:55 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 31 05:57:56 compute-0 ceph-mon[75251]: 2.2 scrub ok
Jan 31 05:57:56 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 05:57:56 compute-0 ceph-mon[75251]: osdmap e103: 3 total, 3 up, 3 in
Jan 31 05:57:57 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 31 05:57:57 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 31 05:57:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 2 objects/s recovering
Jan 31 05:57:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 31 05:57:57 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 05:57:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:57:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 31 05:57:58 compute-0 ceph-mon[75251]: 7.11 scrub starts
Jan 31 05:57:58 compute-0 ceph-mon[75251]: 7.11 scrub ok
Jan 31 05:57:58 compute-0 ceph-mon[75251]: pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 76 B/s, 2 objects/s recovering
Jan 31 05:57:58 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 05:57:58 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 31 05:57:58 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 31 05:57:58 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 31 05:57:58 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 31 05:57:58 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 05:57:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 05:57:58 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 31 05:57:59 compute-0 ceph-mon[75251]: 4.9 scrub starts
Jan 31 05:57:59 compute-0 ceph-mon[75251]: 4.9 scrub ok
Jan 31 05:57:59 compute-0 ceph-mon[75251]: 7.1c scrub starts
Jan 31 05:57:59 compute-0 ceph-mon[75251]: 7.1c scrub ok
Jan 31 05:57:59 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 05:57:59 compute-0 ceph-mon[75251]: osdmap e104: 3 total, 3 up, 3 in
Jan 31 05:57:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 05:57:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 31 05:57:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 05:58:00 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 31 05:58:00 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 31 05:58:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 31 05:58:00 compute-0 ceph-mon[75251]: pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 05:58:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 05:58:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 05:58:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 05:58:00 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 31 05:58:01 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 31 05:58:01 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 31 05:58:01 compute-0 ceph-mon[75251]: 2.d scrub starts
Jan 31 05:58:01 compute-0 ceph-mon[75251]: 2.d scrub ok
Jan 31 05:58:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 05:58:01 compute-0 ceph-mon[75251]: osdmap e105: 3 total, 3 up, 3 in
Jan 31 05:58:01 compute-0 ceph-mon[75251]: 4.14 scrub starts
Jan 31 05:58:01 compute-0 ceph-mon[75251]: 4.14 scrub ok
Jan 31 05:58:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 05:58:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 31 05:58:01 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 05:58:02 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 31 05:58:02 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 31 05:58:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 31 05:58:02 compute-0 ceph-mon[75251]: pgmap v224: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 05:58:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 05:58:02 compute-0 ceph-mon[75251]: 4.5 scrub starts
Jan 31 05:58:02 compute-0 ceph-mon[75251]: 4.5 scrub ok
Jan 31 05:58:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 05:58:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 05:58:02 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 31 05:58:02 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 106 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=106 pruub=14.810478210s) [2] r=-1 lpr=106 pi=[79,106)/1 crt=63'1508 active pruub 217.752777100s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:02 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 106 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=106 pruub=14.810436249s) [2] r=-1 lpr=106 pi=[79,106)/1 crt=63'1508 unknown NOTIFY pruub 217.752777100s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:02 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 106 pg[9.13( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=106) [2] r=0 lpr=106 pi=[79,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 31 05:58:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 05:58:02 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 31 05:58:03 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 107 pg[9.13( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[79,107)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:03 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 107 pg[9.13( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=107) [2]/[0] r=-1 lpr=107 pi=[79,107)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:03 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 107 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=107) [2]/[0] r=0 lpr=107 pi=[79,107)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:03 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 107 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=107) [2]/[0] r=0 lpr=107 pi=[79,107)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 31 05:58:03 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 05:58:03 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 05:58:03 compute-0 ceph-mon[75251]: osdmap e106: 3 total, 3 up, 3 in
Jan 31 05:58:03 compute-0 ceph-mon[75251]: osdmap e107: 3 total, 3 up, 3 in
Jan 31 05:58:03 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 05:58:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 31 05:58:03 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 05:58:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 05:58:03 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 31 05:58:04 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 108 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=107/108 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=107) [2]/[0] async=[2] r=0 lpr=107 pi=[79,107)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:04 compute-0 ceph-mon[75251]: pgmap v227: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 05:58:04 compute-0 ceph-mon[75251]: osdmap e108: 3 total, 3 up, 3 in
Jan 31 05:58:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 31 05:58:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 05:58:05 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 31 05:58:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 109 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=107/108 n=7 ec=70/41 lis/c=107/79 les/c/f=108/80/0 sis=109 pruub=15.093985558s) [2] async=[2] r=-1 lpr=109 pi=[79,109)/1 crt=63'1508 active pruub 220.657211304s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:05 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 109 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=107/108 n=7 ec=70/41 lis/c=107/79 les/c/f=108/80/0 sis=109 pruub=15.093889236s) [2] r=-1 lpr=109 pi=[79,109)/1 crt=63'1508 unknown NOTIFY pruub 220.657211304s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 109 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=107/79 les/c/f=108/80/0 sis=109) [2] r=0 lpr=109 pi=[79,109)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:05 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 109 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=107/79 les/c/f=108/80/0 sis=109) [2] r=0 lpr=109 pi=[79,109)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 31 05:58:05 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 05:58:06 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 31 05:58:06 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 31 05:58:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 31 05:58:06 compute-0 ceph-mon[75251]: osdmap e109: 3 total, 3 up, 3 in
Jan 31 05:58:06 compute-0 ceph-mon[75251]: pgmap v230: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:06 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 05:58:06 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 31 05:58:06 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 31 05:58:06 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 05:58:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 05:58:06 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 31 05:58:06 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 110 pg[9.13( v 63'1508 (0'0,63'1508] local-lis/les=109/110 n=7 ec=70/41 lis/c=107/79 les/c/f=108/80/0 sis=109) [2] r=0 lpr=109 pi=[79,109)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 110 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=110 pruub=10.727447510s) [1] r=-1 lpr=110 pi=[79,110)/1 crt=63'1508 active pruub 217.750564575s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:06 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 110 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=110 pruub=10.727382660s) [1] r=-1 lpr=110 pi=[79,110)/1 crt=63'1508 unknown NOTIFY pruub 217.750564575s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:06 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 110 pg[9.15( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=110) [1] r=0 lpr=110 pi=[79,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 31 05:58:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 62 B/s, 1 objects/s recovering
Jan 31 05:58:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 31 05:58:07 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 05:58:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 05:58:07 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 31 05:58:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 111 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=111) [1]/[0] r=0 lpr=111 pi=[79,111)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:07 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 111 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=111) [1]/[0] r=0 lpr=111 pi=[79,111)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 111 pg[9.15( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=111) [1]/[0] r=-1 lpr=111 pi=[79,111)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:07 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 111 pg[9.15( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=111) [1]/[0] r=-1 lpr=111 pi=[79,111)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:07 compute-0 ceph-mon[75251]: 2.3 scrub starts
Jan 31 05:58:07 compute-0 ceph-mon[75251]: 2.3 scrub ok
Jan 31 05:58:07 compute-0 ceph-mon[75251]: 4.1c scrub starts
Jan 31 05:58:07 compute-0 ceph-mon[75251]: 4.1c scrub ok
Jan 31 05:58:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 05:58:07 compute-0 ceph-mon[75251]: osdmap e110: 3 total, 3 up, 3 in
Jan 31 05:58:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:08 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 31 05:58:08 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 31 05:58:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 31 05:58:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 05:58:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 05:58:08 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 31 05:58:08 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 31 05:58:08 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 31 05:58:08 compute-0 ceph-mon[75251]: pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 62 B/s, 1 objects/s recovering
Jan 31 05:58:08 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 05:58:08 compute-0 ceph-mon[75251]: osdmap e111: 3 total, 3 up, 3 in
Jan 31 05:58:08 compute-0 ceph-mon[75251]: 2.5 scrub starts
Jan 31 05:58:08 compute-0 ceph-mon[75251]: 2.5 scrub ok
Jan 31 05:58:08 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 05:58:08 compute-0 ceph-mon[75251]: osdmap e112: 3 total, 3 up, 3 in
Jan 31 05:58:08 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 112 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=111/112 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=111) [1]/[0] async=[1] r=0 lpr=111 pi=[79,111)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 1 objects/s recovering
Jan 31 05:58:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 31 05:58:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 05:58:09 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 112 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=112 pruub=12.390853882s) [0] r=-1 lpr=112 pi=[90,112)/1 crt=63'1508 active pruub 210.867507935s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:09 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 112 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=112 pruub=12.390724182s) [0] r=-1 lpr=112 pi=[90,112)/1 crt=63'1508 unknown NOTIFY pruub 210.867507935s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:09 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 112 pg[9.16( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=112) [0] r=0 lpr=112 pi=[90,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:09 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 31 05:58:09 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 31 05:58:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 31 05:58:09 compute-0 ceph-mon[75251]: 6.3 scrub starts
Jan 31 05:58:09 compute-0 ceph-mon[75251]: 6.3 scrub ok
Jan 31 05:58:09 compute-0 ceph-mon[75251]: pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 1 objects/s recovering
Jan 31 05:58:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 05:58:09 compute-0 ceph-mon[75251]: 6.7 scrub starts
Jan 31 05:58:10 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 31 05:58:10 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 31 05:58:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 05:58:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 05:58:10 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 31 05:58:10 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 31 05:58:10 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 113 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=111/112 n=7 ec=70/41 lis/c=111/79 les/c/f=112/80/0 sis=113 pruub=14.257227898s) [1] async=[1] r=-1 lpr=113 pi=[79,113)/1 crt=63'1508 active pruub 225.313766479s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:10 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 113 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=111/112 n=7 ec=70/41 lis/c=111/79 les/c/f=112/80/0 sis=113 pruub=14.257030487s) [1] r=-1 lpr=113 pi=[79,113)/1 crt=63'1508 unknown NOTIFY pruub 225.313766479s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:10 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 113 pg[9.16( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[90,113)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:10 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 113 pg[9.16( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[90,113)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:10 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 113 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=113) [0]/[2] r=0 lpr=113 pi=[90,113)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:10 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 113 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=113) [0]/[2] r=0 lpr=113 pi=[90,113)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:10 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 31 05:58:10 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 113 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=111/79 les/c/f=112/80/0 sis=113) [1] r=0 lpr=113 pi=[79,113)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:10 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 113 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=111/79 les/c/f=112/80/0 sis=113) [1] r=0 lpr=113 pi=[79,113)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:11 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 31 05:58:11 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 31 05:58:11 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 31 05:58:11 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 31 05:58:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Jan 31 05:58:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 31 05:58:11 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 05:58:11 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 31 05:58:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 31 05:58:11 compute-0 ceph-mon[75251]: 6.7 scrub ok
Jan 31 05:58:11 compute-0 ceph-mon[75251]: 4.13 scrub starts
Jan 31 05:58:11 compute-0 ceph-mon[75251]: 4.13 scrub ok
Jan 31 05:58:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 05:58:11 compute-0 ceph-mon[75251]: osdmap e113: 3 total, 3 up, 3 in
Jan 31 05:58:11 compute-0 ceph-mon[75251]: 6.0 scrub starts
Jan 31 05:58:11 compute-0 ceph-mon[75251]: 6.0 scrub ok
Jan 31 05:58:11 compute-0 ceph-mon[75251]: 4.4 scrub starts
Jan 31 05:58:11 compute-0 ceph-mon[75251]: 4.4 scrub ok
Jan 31 05:58:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 05:58:11 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 31 05:58:11 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 05:58:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 05:58:11 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 31 05:58:11 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 114 pg[9.15( v 63'1508 (0'0,63'1508] local-lis/les=113/114 n=7 ec=70/41 lis/c=111/79 les/c/f=112/80/0 sis=113) [1] r=0 lpr=113 pi=[79,113)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:12 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 31 05:58:12 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 31 05:58:12 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 31 05:58:12 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 31 05:58:12 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 114 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=113/114 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[90,113)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:13 compute-0 ceph-mon[75251]: 4.11 scrub starts
Jan 31 05:58:13 compute-0 ceph-mon[75251]: 4.11 scrub ok
Jan 31 05:58:13 compute-0 ceph-mon[75251]: pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 52 B/s, 1 objects/s recovering
Jan 31 05:58:13 compute-0 ceph-mon[75251]: 6.a scrub starts
Jan 31 05:58:13 compute-0 ceph-mon[75251]: 6.a scrub ok
Jan 31 05:58:13 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 05:58:13 compute-0 ceph-mon[75251]: osdmap e114: 3 total, 3 up, 3 in
Jan 31 05:58:13 compute-0 ceph-mon[75251]: 4.7 scrub starts
Jan 31 05:58:13 compute-0 ceph-mon[75251]: 4.7 scrub ok
Jan 31 05:58:13 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 31 05:58:13 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 31 05:58:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 1 activating+remapped, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/283 objects misplaced (2.120%)
Jan 31 05:58:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 31 05:58:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 31 05:58:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 31 05:58:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 05:58:14 compute-0 ceph-mon[75251]: 4.a scrub starts
Jan 31 05:58:14 compute-0 ceph-mon[75251]: 4.a scrub ok
Jan 31 05:58:14 compute-0 ceph-mon[75251]: 2.7 scrub starts
Jan 31 05:58:14 compute-0 ceph-mon[75251]: 2.7 scrub ok
Jan 31 05:58:14 compute-0 ceph-mon[75251]: pgmap v239: 305 pgs: 1 activating+remapped, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/283 objects misplaced (2.120%)
Jan 31 05:58:14 compute-0 ceph-mon[75251]: 6.9 scrub starts
Jan 31 05:58:14 compute-0 ceph-mon[75251]: 6.9 scrub ok
Jan 31 05:58:14 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 115 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=113/114 n=7 ec=70/41 lis/c=113/90 les/c/f=114/91/0 sis=115 pruub=14.408432961s) [0] async=[0] r=-1 lpr=115 pi=[90,115)/1 crt=63'1508 active pruub 217.598464966s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:14 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 115 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=113/114 n=7 ec=70/41 lis/c=113/90 les/c/f=114/91/0 sis=115 pruub=14.408330917s) [0] r=-1 lpr=115 pi=[90,115)/1 crt=63'1508 unknown NOTIFY pruub 217.598464966s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:14 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 115 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=113/90 les/c/f=114/91/0 sis=115) [0] r=0 lpr=115 pi=[90,115)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:14 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 115 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=113/90 les/c/f=114/91/0 sis=115) [0] r=0 lpr=115 pi=[90,115)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:14 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 31 05:58:14 compute-0 sudo[101778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:58:14 compute-0 sudo[101778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:14 compute-0 sudo[101778]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:14 compute-0 sudo[101803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 05:58:14 compute-0 sudo[101803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 31 05:58:15 compute-0 ceph-mon[75251]: osdmap e115: 3 total, 3 up, 3 in
Jan 31 05:58:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 05:58:15 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 31 05:58:15 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 31 05:58:15 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 116 pg[9.16( v 63'1508 (0'0,63'1508] local-lis/les=115/116 n=7 ec=70/41 lis/c=113/90 les/c/f=114/91/0 sis=115) [0] r=0 lpr=115 pi=[90,115)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:15 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 31 05:58:15 compute-0 sudo[101803]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 activating+remapped, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/283 objects misplaced (2.120%); 8 B/s, 0 objects/s recovering
Jan 31 05:58:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:58:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:58:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:58:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:58:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:58:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:58:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:58:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:58:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:58:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:58:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:58:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:58:15 compute-0 sudo[101860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:58:15 compute-0 sudo[101860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:15 compute-0 sudo[101860]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:15 compute-0 sudo[101885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:58:15 compute-0 sudo[101885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:58:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:58:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:58:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:58:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:58:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:58:15 compute-0 podman[101924]: 2026-01-31 05:58:15.514448397 +0000 UTC m=+0.039297794 container create dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_yalow, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 05:58:15 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 31 05:58:15 compute-0 systemd[1]: Started libpod-conmon-dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131.scope.
Jan 31 05:58:15 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 31 05:58:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:58:15 compute-0 podman[101924]: 2026-01-31 05:58:15.588915218 +0000 UTC m=+0.113764625 container init dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_yalow, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:58:15 compute-0 podman[101924]: 2026-01-31 05:58:15.495283239 +0000 UTC m=+0.020132696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:58:15 compute-0 podman[101924]: 2026-01-31 05:58:15.595453452 +0000 UTC m=+0.120302889 container start dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_yalow, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:58:15 compute-0 podman[101924]: 2026-01-31 05:58:15.599707851 +0000 UTC m=+0.124557348 container attach dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:58:15 compute-0 xenodochial_yalow[101940]: 167 167
Jan 31 05:58:15 compute-0 systemd[1]: libpod-dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131.scope: Deactivated successfully.
Jan 31 05:58:15 compute-0 conmon[101940]: conmon dd1993542b6a8ac9811a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131.scope/container/memory.events
Jan 31 05:58:15 compute-0 podman[101924]: 2026-01-31 05:58:15.602242202 +0000 UTC m=+0.127091599 container died dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_yalow, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:58:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c24059cc0c6683d8dbb51dea3fa07782d67876f9b41b18701474b366f84c8805-merged.mount: Deactivated successfully.
Jan 31 05:58:15 compute-0 podman[101924]: 2026-01-31 05:58:15.654089868 +0000 UTC m=+0.178939275 container remove dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 05:58:15 compute-0 systemd[1]: libpod-conmon-dd1993542b6a8ac9811ab2fd90cf9a001d9cc91803675ca58d13a2a8b72d1131.scope: Deactivated successfully.
Jan 31 05:58:15 compute-0 podman[101963]: 2026-01-31 05:58:15.781829246 +0000 UTC m=+0.038392440 container create d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_newton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:58:15 compute-0 systemd[1]: Started libpod-conmon-d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65.scope.
Jan 31 05:58:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90def11712a9aff393a6abb3b98d417826505979cb0918c098d6c71ed44098d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90def11712a9aff393a6abb3b98d417826505979cb0918c098d6c71ed44098d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90def11712a9aff393a6abb3b98d417826505979cb0918c098d6c71ed44098d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90def11712a9aff393a6abb3b98d417826505979cb0918c098d6c71ed44098d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b90def11712a9aff393a6abb3b98d417826505979cb0918c098d6c71ed44098d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:15 compute-0 podman[101963]: 2026-01-31 05:58:15.857203971 +0000 UTC m=+0.113767185 container init d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 05:58:15 compute-0 podman[101963]: 2026-01-31 05:58:15.763544532 +0000 UTC m=+0.020107816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:58:15 compute-0 podman[101963]: 2026-01-31 05:58:15.866491822 +0000 UTC m=+0.123055026 container start d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_newton, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 31 05:58:15 compute-0 podman[101963]: 2026-01-31 05:58:15.870036592 +0000 UTC m=+0.126599806 container attach d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_newton, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:58:16 compute-0 ceph-mon[75251]: 5.1 scrub starts
Jan 31 05:58:16 compute-0 ceph-mon[75251]: osdmap e116: 3 total, 3 up, 3 in
Jan 31 05:58:16 compute-0 ceph-mon[75251]: 5.1 scrub ok
Jan 31 05:58:16 compute-0 ceph-mon[75251]: pgmap v242: 305 pgs: 1 activating+remapped, 1 active+clean+scrubbing, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/283 objects misplaced (2.120%); 8 B/s, 0 objects/s recovering
Jan 31 05:58:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:58:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:58:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:58:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:58:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:58:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:58:16 compute-0 ceph-mon[75251]: 6.5 scrub starts
Jan 31 05:58:16 compute-0 xenodochial_newton[101979]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:58:16 compute-0 xenodochial_newton[101979]: --> All data devices are unavailable
Jan 31 05:58:16 compute-0 systemd[1]: libpod-d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65.scope: Deactivated successfully.
Jan 31 05:58:16 compute-0 podman[101963]: 2026-01-31 05:58:16.314449811 +0000 UTC m=+0.571013045 container died d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:58:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b90def11712a9aff393a6abb3b98d417826505979cb0918c098d6c71ed44098d-merged.mount: Deactivated successfully.
Jan 31 05:58:16 compute-0 podman[101963]: 2026-01-31 05:58:16.358508439 +0000 UTC m=+0.615071633 container remove d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 05:58:16 compute-0 systemd[1]: libpod-conmon-d8829946449ba010738e7357296720903f7173db57c10f5c3fb4786d3346cf65.scope: Deactivated successfully.
Jan 31 05:58:16 compute-0 sudo[101885]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:16 compute-0 sudo[102009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:58:16 compute-0 sudo[102009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:16 compute-0 sudo[102009]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:16 compute-0 sudo[102034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:58:16 compute-0 sudo[102034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:16 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 31 05:58:16 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 31 05:58:16 compute-0 podman[102069]: 2026-01-31 05:58:16.796370884 +0000 UTC m=+0.070289634 container create b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_khorana, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 05:58:16 compute-0 systemd[1]: Started libpod-conmon-b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93.scope.
Jan 31 05:58:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:58:16 compute-0 podman[102069]: 2026-01-31 05:58:16.755301361 +0000 UTC m=+0.029220191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:58:16 compute-0 podman[102069]: 2026-01-31 05:58:16.8620867 +0000 UTC m=+0.136005480 container init b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_khorana, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:58:16 compute-0 podman[102069]: 2026-01-31 05:58:16.866161034 +0000 UTC m=+0.140079774 container start b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_khorana, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:58:16 compute-0 kind_khorana[102086]: 167 167
Jan 31 05:58:16 compute-0 systemd[1]: libpod-b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93.scope: Deactivated successfully.
Jan 31 05:58:16 compute-0 podman[102069]: 2026-01-31 05:58:16.869698134 +0000 UTC m=+0.143616924 container attach b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:58:16 compute-0 podman[102069]: 2026-01-31 05:58:16.870168147 +0000 UTC m=+0.144086937 container died b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_khorana, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 05:58:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e5416985029b48027905f40fec5ce2440831da969de391acb4df2175cbb8e83-merged.mount: Deactivated successfully.
Jan 31 05:58:16 compute-0 podman[102069]: 2026-01-31 05:58:16.916534579 +0000 UTC m=+0.190453339 container remove b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 05:58:16 compute-0 systemd[1]: libpod-conmon-b43b766b81e6fe6c6be6fc89a34f501470a2b1b63df475b462119fcdfe3faa93.scope: Deactivated successfully.
Jan 31 05:58:17 compute-0 podman[102108]: 2026-01-31 05:58:17.062820067 +0000 UTC m=+0.037873335 container create 0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_torvalds, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 05:58:17 compute-0 ceph-mon[75251]: 6.5 scrub ok
Jan 31 05:58:17 compute-0 ceph-mon[75251]: 8.10 scrub starts
Jan 31 05:58:17 compute-0 ceph-mon[75251]: 8.10 scrub ok
Jan 31 05:58:17 compute-0 systemd[1]: Started libpod-conmon-0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632.scope.
Jan 31 05:58:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7627851360e27173fa6139377f3e82a5c1936af9330abfd85a35be31a8507c35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7627851360e27173fa6139377f3e82a5c1936af9330abfd85a35be31a8507c35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7627851360e27173fa6139377f3e82a5c1936af9330abfd85a35be31a8507c35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7627851360e27173fa6139377f3e82a5c1936af9330abfd85a35be31a8507c35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:17 compute-0 podman[102108]: 2026-01-31 05:58:17.045739187 +0000 UTC m=+0.020792445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:58:17 compute-0 podman[102108]: 2026-01-31 05:58:17.151401624 +0000 UTC m=+0.126454902 container init 0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:58:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:58:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 31 05:58:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 05:58:17 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 31 05:58:17 compute-0 podman[102108]: 2026-01-31 05:58:17.160662014 +0000 UTC m=+0.135715252 container start 0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 31 05:58:17 compute-0 podman[102108]: 2026-01-31 05:58:17.164319047 +0000 UTC m=+0.139372315 container attach 0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_torvalds, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 05:58:17 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]: {
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:     "0": [
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:         {
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "devices": [
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "/dev/loop3"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             ],
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_name": "ceph_lv0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_size": "21470642176",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "name": "ceph_lv0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "tags": {
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cluster_name": "ceph",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.crush_device_class": "",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.encrypted": "0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.objectstore": "bluestore",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osd_id": "0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.type": "block",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.vdo": "0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.with_tpm": "0"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             },
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "type": "block",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "vg_name": "ceph_vg0"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:         }
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:     ],
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:     "1": [
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:         {
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "devices": [
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "/dev/loop4"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             ],
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_name": "ceph_lv1",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_size": "21470642176",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "name": "ceph_lv1",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "tags": {
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cluster_name": "ceph",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.crush_device_class": "",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.encrypted": "0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.objectstore": "bluestore",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osd_id": "1",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.type": "block",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.vdo": "0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.with_tpm": "0"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             },
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "type": "block",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "vg_name": "ceph_vg1"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:         }
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:     ],
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:     "2": [
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:         {
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "devices": [
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "/dev/loop5"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             ],
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_name": "ceph_lv2",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_size": "21470642176",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "name": "ceph_lv2",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "tags": {
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.cluster_name": "ceph",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.crush_device_class": "",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.encrypted": "0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.objectstore": "bluestore",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osd_id": "2",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.type": "block",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.vdo": "0",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:                 "ceph.with_tpm": "0"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             },
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "type": "block",
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:             "vg_name": "ceph_vg2"
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:         }
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]:     ]
Jan 31 05:58:17 compute-0 jovial_torvalds[102125]: }
Jan 31 05:58:17 compute-0 systemd[1]: libpod-0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632.scope: Deactivated successfully.
Jan 31 05:58:17 compute-0 podman[102134]: 2026-01-31 05:58:17.438005603 +0000 UTC m=+0.016727871 container died 0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_torvalds, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 05:58:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7627851360e27173fa6139377f3e82a5c1936af9330abfd85a35be31a8507c35-merged.mount: Deactivated successfully.
Jan 31 05:58:17 compute-0 podman[102134]: 2026-01-31 05:58:17.475035653 +0000 UTC m=+0.053757901 container remove 0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 31 05:58:17 compute-0 systemd[1]: libpod-conmon-0681a74f0049d8f1e1ef980a4a41cd4b0b3d98fea7aa9de2f5c946f72db67632.scope: Deactivated successfully.
Jan 31 05:58:17 compute-0 sudo[102034]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:17 compute-0 sudo[102149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:58:17 compute-0 sudo[102149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:17 compute-0 sudo[102149]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:17 compute-0 sudo[102174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:58:17 compute-0 sudo[102174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:17 compute-0 podman[102212]: 2026-01-31 05:58:17.890033616 +0000 UTC m=+0.042679269 container create 49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:58:17 compute-0 systemd[1]: Started libpod-conmon-49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7.scope.
Jan 31 05:58:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:58:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:17 compute-0 podman[102212]: 2026-01-31 05:58:17.951495592 +0000 UTC m=+0.104141245 container init 49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:58:17 compute-0 podman[102212]: 2026-01-31 05:58:17.959314272 +0000 UTC m=+0.111959925 container start 49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_elbakyan, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 05:58:17 compute-0 podman[102212]: 2026-01-31 05:58:17.864641513 +0000 UTC m=+0.017287196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:58:17 compute-0 podman[102212]: 2026-01-31 05:58:17.962404229 +0000 UTC m=+0.115049942 container attach 49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 05:58:17 compute-0 thirsty_elbakyan[102229]: 167 167
Jan 31 05:58:17 compute-0 systemd[1]: libpod-49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7.scope: Deactivated successfully.
Jan 31 05:58:17 compute-0 podman[102212]: 2026-01-31 05:58:17.964909329 +0000 UTC m=+0.117554982 container died 49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:58:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-332ea797959292691751e96801c1e5e50fe52450fcb903772bf06a4d3136b686-merged.mount: Deactivated successfully.
Jan 31 05:58:18 compute-0 podman[102212]: 2026-01-31 05:58:18.007021832 +0000 UTC m=+0.159667485 container remove 49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_elbakyan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:58:18 compute-0 systemd[1]: libpod-conmon-49824866adf6a77c14ab3204f3134d597f092be9e45ea84fb5a6b90ed7b8b0d7.scope: Deactivated successfully.
Jan 31 05:58:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 31 05:58:18 compute-0 ceph-mon[75251]: pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:58:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 05:58:18 compute-0 ceph-mon[75251]: 4.1 scrub starts
Jan 31 05:58:18 compute-0 ceph-mon[75251]: 4.1 scrub ok
Jan 31 05:58:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 05:58:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 05:58:18 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 31 05:58:18 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 31 05:58:18 compute-0 podman[102253]: 2026-01-31 05:58:18.134966294 +0000 UTC m=+0.039550361 container create b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shaw, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 31 05:58:18 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 31 05:58:18 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 31 05:58:18 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 31 05:58:18 compute-0 systemd[1]: Started libpod-conmon-b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1.scope.
Jan 31 05:58:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e3b390d8e3b6bb5d43e6032664a528918ab792f0619de6c13d94c6e884ec38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e3b390d8e3b6bb5d43e6032664a528918ab792f0619de6c13d94c6e884ec38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e3b390d8e3b6bb5d43e6032664a528918ab792f0619de6c13d94c6e884ec38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e3b390d8e3b6bb5d43e6032664a528918ab792f0619de6c13d94c6e884ec38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:58:18 compute-0 podman[102253]: 2026-01-31 05:58:18.199310611 +0000 UTC m=+0.103894688 container init b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:58:18 compute-0 podman[102253]: 2026-01-31 05:58:18.208089918 +0000 UTC m=+0.112673975 container start b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:58:18 compute-0 podman[102253]: 2026-01-31 05:58:18.21316846 +0000 UTC m=+0.117752567 container attach b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 05:58:18 compute-0 podman[102253]: 2026-01-31 05:58:18.117272218 +0000 UTC m=+0.021856295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:58:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 117 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=117 pruub=14.612186432s) [2] r=-1 lpr=117 pi=[79,117)/1 crt=63'1508 active pruub 233.753051758s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:18 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 117 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=117 pruub=14.612139702s) [2] r=-1 lpr=117 pi=[79,117)/1 crt=63'1508 unknown NOTIFY pruub 233.753051758s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:18 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 117 pg[9.19( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=117) [2] r=0 lpr=117 pi=[79,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:18 compute-0 lvm[102346]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:58:18 compute-0 lvm[102346]: VG ceph_vg0 finished
Jan 31 05:58:18 compute-0 lvm[102349]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:58:18 compute-0 lvm[102349]: VG ceph_vg1 finished
Jan 31 05:58:18 compute-0 lvm[102351]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:58:18 compute-0 lvm[102351]: VG ceph_vg2 finished
Jan 31 05:58:18 compute-0 lvm[102352]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:58:18 compute-0 lvm[102352]: VG ceph_vg0 finished
Jan 31 05:58:18 compute-0 vigilant_shaw[102270]: {}
Jan 31 05:58:18 compute-0 systemd[1]: libpod-b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1.scope: Deactivated successfully.
Jan 31 05:58:18 compute-0 podman[102253]: 2026-01-31 05:58:18.880341706 +0000 UTC m=+0.784925873 container died b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shaw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:58:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-91e3b390d8e3b6bb5d43e6032664a528918ab792f0619de6c13d94c6e884ec38-merged.mount: Deactivated successfully.
Jan 31 05:58:18 compute-0 podman[102253]: 2026-01-31 05:58:18.925967207 +0000 UTC m=+0.830551254 container remove b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:58:18 compute-0 systemd[1]: libpod-conmon-b7955830bec8a5c9fc8c98cc2d397a42300ab3b890815e9c774bdb472137f3a1.scope: Deactivated successfully.
Jan 31 05:58:18 compute-0 sudo[102174]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:58:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:58:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:58:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:58:19 compute-0 sudo[102376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:58:19 compute-0 sudo[102376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:58:19 compute-0 sudo[102376]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 31 05:58:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 05:58:19 compute-0 ceph-mon[75251]: osdmap e117: 3 total, 3 up, 3 in
Jan 31 05:58:19 compute-0 ceph-mon[75251]: 2.6 scrub starts
Jan 31 05:58:19 compute-0 ceph-mon[75251]: 4.1b scrub starts
Jan 31 05:58:19 compute-0 ceph-mon[75251]: 4.1b scrub ok
Jan 31 05:58:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:58:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:58:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 05:58:19 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 118 pg[9.19( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=118) [2]/[0] r=-1 lpr=118 pi=[79,118)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:19 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 118 pg[9.19( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=118) [2]/[0] r=-1 lpr=118 pi=[79,118)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:19 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 05:58:19 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 31 05:58:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:58:19 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 118 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=118) [2]/[0] r=0 lpr=118 pi=[79,118)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:19 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 118 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=79/80 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=118) [2]/[0] r=0 lpr=118 pi=[79,118)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 31 05:58:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 05:58:19 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 31 05:58:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 31 05:58:20 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 05:58:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 05:58:20 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 31 05:58:20 compute-0 ceph-mon[75251]: 2.6 scrub ok
Jan 31 05:58:20 compute-0 ceph-mon[75251]: osdmap e118: 3 total, 3 up, 3 in
Jan 31 05:58:20 compute-0 ceph-mon[75251]: pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:58:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 05:58:20 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 119 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=118/119 n=7 ec=70/41 lis/c=79/79 les/c/f=80/80/0 sis=118) [2]/[0] async=[2] r=0 lpr=118 pi=[79,118)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=13}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 31 05:58:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:58:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 31 05:58:21 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 05:58:21 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 31 05:58:21 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 31 05:58:21 compute-0 ceph-mon[75251]: 4.f scrub starts
Jan 31 05:58:21 compute-0 ceph-mon[75251]: 4.f scrub ok
Jan 31 05:58:21 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 05:58:21 compute-0 ceph-mon[75251]: osdmap e119: 3 total, 3 up, 3 in
Jan 31 05:58:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 05:58:21 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 31 05:58:21 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 120 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=118/119 n=7 ec=70/41 lis/c=118/79 les/c/f=119/80/0 sis=120 pruub=14.819360733s) [2] async=[2] r=-1 lpr=120 pi=[79,120)/1 crt=63'1508 active pruub 236.774215698s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:21 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 120 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=118/119 n=7 ec=70/41 lis/c=118/79 les/c/f=119/80/0 sis=120 pruub=14.819273949s) [2] r=-1 lpr=120 pi=[79,120)/1 crt=63'1508 unknown NOTIFY pruub 236.774215698s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:21 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 120 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=118/79 les/c/f=119/80/0 sis=120) [2] r=0 lpr=120 pi=[79,120)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:21 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 120 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=118/79 les/c/f=119/80/0 sis=120) [2] r=0 lpr=120 pi=[79,120)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:21 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 31 05:58:21 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 31 05:58:22 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 31 05:58:22 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 31 05:58:22 compute-0 ceph-mon[75251]: pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 05:58:22 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 05:58:22 compute-0 ceph-mon[75251]: 4.2 scrub starts
Jan 31 05:58:22 compute-0 ceph-mon[75251]: 4.2 scrub ok
Jan 31 05:58:22 compute-0 ceph-mon[75251]: osdmap e120: 3 total, 3 up, 3 in
Jan 31 05:58:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 31 05:58:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 05:58:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 05:58:22 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 31 05:58:22 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 121 pg[9.19( v 63'1508 (0'0,63'1508] local-lis/les=120/121 n=7 ec=70/41 lis/c=118/79 les/c/f=119/80/0 sis=120) [2] r=0 lpr=120 pi=[79,120)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 3 objects/s recovering
Jan 31 05:58:23 compute-0 ceph-mon[75251]: 8.b scrub starts
Jan 31 05:58:23 compute-0 ceph-mon[75251]: 8.b scrub ok
Jan 31 05:58:23 compute-0 ceph-mon[75251]: 5.18 scrub starts
Jan 31 05:58:23 compute-0 ceph-mon[75251]: 5.18 scrub ok
Jan 31 05:58:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 05:58:23 compute-0 ceph-mon[75251]: osdmap e121: 3 total, 3 up, 3 in
Jan 31 05:58:25 compute-0 ceph-mon[75251]: pgmap v251: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 3 objects/s recovering
Jan 31 05:58:25 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 31 05:58:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 05:58:25 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 31 05:58:25 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 31 05:58:25 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 31 05:58:26 compute-0 ceph-mon[75251]: pgmap v252: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Jan 31 05:58:26 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 31 05:58:26 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 31 05:58:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 58 B/s, 1 objects/s recovering
Jan 31 05:58:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 31 05:58:27 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 05:58:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 31 05:58:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 31 05:58:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 31 05:58:27 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 05:58:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 05:58:27 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 31 05:58:27 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 122 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=100/101 n=7 ec=70/41 lis/c=100/100 les/c/f=101/101/0 sis=122 pruub=12.833190918s) [0] r=-1 lpr=122 pi=[100,122)/1 crt=63'1508 active pruub 229.880844116s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:27 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 122 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=100/101 n=7 ec=70/41 lis/c=100/100 les/c/f=101/101/0 sis=122 pruub=12.833029747s) [0] r=-1 lpr=122 pi=[100,122)/1 crt=63'1508 unknown NOTIFY pruub 229.880844116s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:27 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 122 pg[9.1c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=100/100 les/c/f=101/101/0 sis=122) [0] r=0 lpr=122 pi=[100,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:28 compute-0 ceph-mon[75251]: 4.d scrub starts
Jan 31 05:58:28 compute-0 ceph-mon[75251]: 4.d scrub ok
Jan 31 05:58:28 compute-0 ceph-mon[75251]: 10.8 scrub starts
Jan 31 05:58:28 compute-0 ceph-mon[75251]: 10.8 scrub ok
Jan 31 05:58:28 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 05:58:28 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 31 05:58:28 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 31 05:58:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 31 05:58:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 05:58:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 1 objects/s recovering
Jan 31 05:58:29 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 31 05:58:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 31 05:58:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 05:58:29 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 123 pg[9.1c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=100/100 les/c/f=101/101/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[100,123)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:29 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 123 pg[9.1c( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=100/100 les/c/f=101/101/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[100,123)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 123 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=100/101 n=7 ec=70/41 lis/c=100/100 les/c/f=101/101/0 sis=123) [0]/[2] r=0 lpr=123 pi=[100,123)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:29 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 123 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=100/101 n=7 ec=70/41 lis/c=100/100 les/c/f=101/101/0 sis=123) [0]/[2] r=0 lpr=123 pi=[100,123)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:29 compute-0 ceph-mon[75251]: 11.14 scrub starts
Jan 31 05:58:29 compute-0 ceph-mon[75251]: 11.14 scrub ok
Jan 31 05:58:29 compute-0 ceph-mon[75251]: pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 58 B/s, 1 objects/s recovering
Jan 31 05:58:29 compute-0 ceph-mon[75251]: 2.4 scrub starts
Jan 31 05:58:29 compute-0 ceph-mon[75251]: 2.4 scrub ok
Jan 31 05:58:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 05:58:29 compute-0 ceph-mon[75251]: osdmap e122: 3 total, 3 up, 3 in
Jan 31 05:58:30 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 31 05:58:30 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 31 05:58:30 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 31 05:58:30 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 05:58:30 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 05:58:30 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 05:58:30 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 05:58:31 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 31 05:58:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 31 05:58:31 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 05:58:31 compute-0 ceph-mon[75251]: 5.1d scrub starts
Jan 31 05:58:31 compute-0 ceph-mon[75251]: 5.1d scrub ok
Jan 31 05:58:31 compute-0 ceph-mon[75251]: pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 1 objects/s recovering
Jan 31 05:58:31 compute-0 ceph-mon[75251]: osdmap e123: 3 total, 3 up, 3 in
Jan 31 05:58:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 05:58:31 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 31 05:58:31 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 124 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=123/124 n=7 ec=70/41 lis/c=100/100 les/c/f=101/101/0 sis=123) [0]/[2] async=[0] r=0 lpr=123 pi=[100,123)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:31 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 31 05:58:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 31 05:58:32 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 05:58:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 05:58:32 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 31 05:58:32 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 31 05:58:32 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 31 05:58:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 125 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=125 pruub=13.441715240s) [0] r=-1 lpr=125 pi=[90,125)/1 crt=63'1508 active pruub 234.868209839s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:32 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 125 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=125 pruub=13.441675186s) [0] r=-1 lpr=125 pi=[90,125)/1 crt=63'1508 unknown NOTIFY pruub 234.868209839s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:32 compute-0 ceph-mon[75251]: 5.c scrub starts
Jan 31 05:58:32 compute-0 ceph-mon[75251]: 5.c scrub ok
Jan 31 05:58:32 compute-0 ceph-mon[75251]: 11.10 scrub starts
Jan 31 05:58:32 compute-0 ceph-mon[75251]: 11.10 scrub ok
Jan 31 05:58:32 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 05:58:32 compute-0 ceph-mon[75251]: osdmap e124: 3 total, 3 up, 3 in
Jan 31 05:58:32 compute-0 ceph-mon[75251]: pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:32 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 05:58:32 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 05:58:32 compute-0 ceph-mon[75251]: osdmap e125: 3 total, 3 up, 3 in
Jan 31 05:58:32 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 125 pg[9.1e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=125) [0] r=0 lpr=125 pi=[90,125)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 31 05:58:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 05:58:33 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 31 05:58:33 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 126 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=123/124 n=7 ec=70/41 lis/c=123/100 les/c/f=124/101/0 sis=126 pruub=14.659321785s) [0] async=[0] r=-1 lpr=126 pi=[100,126)/1 crt=63'1508 active pruub 236.908554077s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:33 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 126 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=123/124 n=7 ec=70/41 lis/c=123/100 les/c/f=124/101/0 sis=126 pruub=14.659239769s) [0] r=-1 lpr=126 pi=[100,126)/1 crt=63'1508 unknown NOTIFY pruub 236.908554077s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:33 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 126 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=126) [0]/[2] r=0 lpr=126 pi=[90,126)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:33 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 126 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=90/91 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=126) [0]/[2] r=0 lpr=126 pi=[90,126)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:33 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 126 pg[9.1e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=126) [0]/[2] r=-1 lpr=126 pi=[90,126)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:33 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 126 pg[9.1e( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=126) [0]/[2] r=-1 lpr=126 pi=[90,126)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:33 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 126 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=123/100 les/c/f=124/101/0 sis=126) [0] r=0 lpr=126 pi=[100,126)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:33 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 126 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=123/100 les/c/f=124/101/0 sis=126) [0] r=0 lpr=126 pi=[100,126)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 1 activating+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9/282 objects misplaced (3.191%)
Jan 31 05:58:33 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 31 05:58:33 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 31 05:58:33 compute-0 ceph-mon[75251]: 10.4 scrub starts
Jan 31 05:58:33 compute-0 ceph-mon[75251]: 10.4 scrub ok
Jan 31 05:58:33 compute-0 ceph-mon[75251]: 5.19 scrub starts
Jan 31 05:58:33 compute-0 ceph-mon[75251]: 5.19 scrub ok
Jan 31 05:58:33 compute-0 ceph-mon[75251]: osdmap e126: 3 total, 3 up, 3 in
Jan 31 05:58:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 31 05:58:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 31 05:58:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 31 05:58:34 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 31 05:58:34 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 31 05:58:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 05:58:34 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 31 05:58:34 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 127 pg[9.1c( v 63'1508 (0'0,63'1508] local-lis/les=126/127 n=7 ec=70/41 lis/c=123/100 les/c/f=124/101/0 sis=126) [0] r=0 lpr=126 pi=[100,126)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:34 compute-0 ceph-mon[75251]: pgmap v261: 305 pgs: 1 activating+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9/282 objects misplaced (3.191%)
Jan 31 05:58:34 compute-0 ceph-mon[75251]: 5.f scrub starts
Jan 31 05:58:34 compute-0 ceph-mon[75251]: 5.f scrub ok
Jan 31 05:58:34 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 127 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=126/127 n=7 ec=70/41 lis/c=90/90 les/c/f=91/91/0 sis=126) [0]/[2] async=[0] r=0 lpr=126 pi=[90,126)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 1 activating+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9/282 objects misplaced (3.191%)
Jan 31 05:58:35 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 31 05:58:35 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 31 05:58:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 31 05:58:36 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 31 05:58:36 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 31 05:58:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 05:58:36 compute-0 ceph-mon[75251]: 10.7 scrub starts
Jan 31 05:58:36 compute-0 ceph-mon[75251]: 10.7 scrub ok
Jan 31 05:58:36 compute-0 ceph-mon[75251]: 5.1a scrub starts
Jan 31 05:58:36 compute-0 ceph-mon[75251]: 5.1a scrub ok
Jan 31 05:58:36 compute-0 ceph-mon[75251]: osdmap e127: 3 total, 3 up, 3 in
Jan 31 05:58:36 compute-0 ceph-mon[75251]: pgmap v263: 305 pgs: 1 activating+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9/282 objects misplaced (3.191%)
Jan 31 05:58:36 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 128 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=126/127 n=7 ec=70/41 lis/c=126/90 les/c/f=127/91/0 sis=128 pruub=14.036077499s) [0] async=[0] r=-1 lpr=128 pi=[90,128)/1 crt=63'1508 active pruub 239.963165283s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:36 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 128 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=126/127 n=7 ec=70/41 lis/c=126/90 les/c/f=127/91/0 sis=128 pruub=14.035931587s) [0] r=-1 lpr=128 pi=[90,128)/1 crt=63'1508 unknown NOTIFY pruub 239.963165283s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:36 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 31 05:58:36 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 128 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=126/90 les/c/f=127/91/0 sis=128) [0] r=0 lpr=128 pi=[90,128)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:36 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 128 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=126/90 les/c/f=127/91/0 sis=128) [0] r=0 lpr=128 pi=[90,128)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:37 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 31 05:58:37 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 31 05:58:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 3 objects/s recovering
Jan 31 05:58:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 05:58:37 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:58:37 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 31 05:58:37 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 31 05:58:37 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 31 05:58:37 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 31 05:58:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 31 05:58:37 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:58:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 05:58:37 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 31 05:58:37 compute-0 ceph-mon[75251]: 11.16 scrub starts
Jan 31 05:58:37 compute-0 ceph-mon[75251]: 11.16 scrub ok
Jan 31 05:58:37 compute-0 ceph-mon[75251]: 8.9 scrub starts
Jan 31 05:58:37 compute-0 ceph-mon[75251]: 8.9 scrub ok
Jan 31 05:58:37 compute-0 ceph-mon[75251]: osdmap e128: 3 total, 3 up, 3 in
Jan 31 05:58:37 compute-0 ceph-mon[75251]: pgmap v265: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 3 objects/s recovering
Jan 31 05:58:37 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 05:58:37 compute-0 ceph-osd[86016]: osd.0 pg_epoch: 129 pg[9.1e( v 63'1508 (0'0,63'1508] local-lis/les=128/129 n=7 ec=70/41 lis/c=126/90 les/c/f=127/91/0 sis=128) [0] r=0 lpr=128 pi=[90,128)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:37 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 129 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=89/89 les/c/f=90/90/0 sis=129 pruub=14.298910141s) [1] r=-1 lpr=129 pi=[89,129)/1 crt=63'1508 active pruub 241.271484375s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:37 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 129 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=89/89 les/c/f=90/90/0 sis=129 pruub=14.298858643s) [1] r=-1 lpr=129 pi=[89,129)/1 crt=63'1508 unknown NOTIFY pruub 241.271484375s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:37 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 129 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=89/89 les/c/f=90/90/0 sis=129) [1] r=0 lpr=129 pi=[89,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 31 05:58:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 05:58:38 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 130 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=89/89 les/c/f=90/90/0 sis=130) [1]/[2] r=0 lpr=130 pi=[89,130)/1 crt=63'1508 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:38 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 130 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=89/90 n=7 ec=70/41 lis/c=89/89 les/c/f=90/90/0 sis=130) [1]/[2] r=0 lpr=130 pi=[89,130)/1 crt=63'1508 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:38 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 31 05:58:38 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 130 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=89/89 les/c/f=90/90/0 sis=130) [1]/[2] r=-1 lpr=130 pi=[89,130)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:38 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 130 pg[9.1f( empty local-lis/les=0/0 n=0 ec=70/41 lis/c=89/89 les/c/f=90/90/0 sis=130) [1]/[2] r=-1 lpr=130 pi=[89,130)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:38 compute-0 sudo[101656]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:38 compute-0 ceph-mon[75251]: 4.18 scrub starts
Jan 31 05:58:38 compute-0 ceph-mon[75251]: 4.18 scrub ok
Jan 31 05:58:38 compute-0 ceph-mon[75251]: 8.16 scrub starts
Jan 31 05:58:38 compute-0 ceph-mon[75251]: 8.16 scrub ok
Jan 31 05:58:38 compute-0 ceph-mon[75251]: 10.17 scrub starts
Jan 31 05:58:38 compute-0 ceph-mon[75251]: 10.17 scrub ok
Jan 31 05:58:38 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 05:58:38 compute-0 ceph-mon[75251]: osdmap e129: 3 total, 3 up, 3 in
Jan 31 05:58:38 compute-0 ceph-mon[75251]: osdmap e130: 3 total, 3 up, 3 in
Jan 31 05:58:38 compute-0 sudo[102571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvjnvedodtgbltsvodgicfoetchedbqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839118.6796794-132-189045020157469/AnsiballZ_command.py'
Jan 31 05:58:38 compute-0 sudo[102571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 31 05:58:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 05:58:39 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 31 05:58:39 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 131 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=130/131 n=7 ec=70/41 lis/c=89/89 les/c/f=90/90/0 sis=130) [1]/[2] async=[1] r=0 lpr=130 pi=[89,130)/1 crt=63'1508 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 142 B/s, 3 objects/s recovering
Jan 31 05:58:39 compute-0 python3.9[102573]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:58:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 31 05:58:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 05:58:40 compute-0 ceph-mon[75251]: osdmap e131: 3 total, 3 up, 3 in
Jan 31 05:58:40 compute-0 ceph-mon[75251]: pgmap v269: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 142 B/s, 3 objects/s recovering
Jan 31 05:58:40 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 31 05:58:40 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 132 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=130/131 n=7 ec=70/41 lis/c=130/89 les/c/f=131/90/0 sis=132 pruub=14.989972115s) [1] async=[1] r=-1 lpr=132 pi=[89,132)/1 crt=63'1508 active pruub 244.195098877s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:40 compute-0 ceph-osd[88127]: osd.2 pg_epoch: 132 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=130/131 n=7 ec=70/41 lis/c=130/89 les/c/f=131/90/0 sis=132 pruub=14.989845276s) [1] r=-1 lpr=132 pi=[89,132)/1 crt=63'1508 unknown NOTIFY pruub 244.195098877s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 05:58:40 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 132 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=130/89 les/c/f=131/90/0 sis=132) [1] r=0 lpr=132 pi=[89,132)/1 pct=0'0 crt=63'1508 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 05:58:40 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 132 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=0/0 n=7 ec=70/41 lis/c=130/89 les/c/f=131/90/0 sis=132) [1] r=0 lpr=132 pi=[89,132)/1 crt=63'1508 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 05:58:40 compute-0 sudo[102571]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:40 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 05:58:40 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 05:58:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 31 05:58:41 compute-0 ceph-mon[75251]: osdmap e132: 3 total, 3 up, 3 in
Jan 31 05:58:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 05:58:41 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 31 05:58:41 compute-0 ceph-osd[87070]: osd.1 pg_epoch: 133 pg[9.1f( v 63'1508 (0'0,63'1508] local-lis/les=132/133 n=7 ec=70/41 lis/c=130/89 les/c/f=131/90/0 sis=132) [1] r=0 lpr=132 pi=[89,132)/1 crt=63'1508 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 05:58:41 compute-0 sudo[102858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gydwyeepiofoaptllforpfkiukaoieou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839120.4652252-140-196787564203188/AnsiballZ_selinux.py'
Jan 31 05:58:41 compute-0 sudo[102858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:41 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 31 05:58:41 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 31 05:58:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:41 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 31 05:58:41 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 31 05:58:41 compute-0 python3.9[102860]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 05:58:41 compute-0 sudo[102858]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:42 compute-0 sudo[103010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydgwyorgbqbdftvtmhoxfcbgnfxufwtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839121.7843034-151-258767657518373/AnsiballZ_command.py'
Jan 31 05:58:42 compute-0 sudo[103010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:42 compute-0 ceph-mon[75251]: 11.6 scrub starts
Jan 31 05:58:42 compute-0 ceph-mon[75251]: 11.6 scrub ok
Jan 31 05:58:42 compute-0 ceph-mon[75251]: osdmap e133: 3 total, 3 up, 3 in
Jan 31 05:58:42 compute-0 ceph-mon[75251]: 4.e scrub starts
Jan 31 05:58:42 compute-0 ceph-mon[75251]: 4.e scrub ok
Jan 31 05:58:42 compute-0 ceph-mon[75251]: pgmap v272: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:42 compute-0 python3.9[103012]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 05:58:42 compute-0 sudo[103010]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:42 compute-0 sudo[103162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-honxmrpjhqviiwrbznjvauzcgjjpvotl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839122.4896545-159-239908115394303/AnsiballZ_file.py'
Jan 31 05:58:42 compute-0 sudo[103162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:42 compute-0 python3.9[103164]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:43 compute-0 sudo[103162]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:43 compute-0 ceph-mon[75251]: 8.17 scrub starts
Jan 31 05:58:43 compute-0 ceph-mon[75251]: 8.17 scrub ok
Jan 31 05:58:43 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 31 05:58:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:43 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 31 05:58:43 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 31 05:58:43 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 31 05:58:43 compute-0 sudo[103314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efbbwbllslyxwskoxaujpgtatmcfzwmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839123.1601822-167-103805117387944/AnsiballZ_mount.py'
Jan 31 05:58:43 compute-0 sudo[103314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:43 compute-0 python3.9[103316]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 05:58:43 compute-0 sudo[103314]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:44 compute-0 ceph-mon[75251]: pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:44 compute-0 ceph-mon[75251]: 4.1a scrub starts
Jan 31 05:58:44 compute-0 ceph-mon[75251]: 4.1a scrub ok
Jan 31 05:58:44 compute-0 ceph-mon[75251]: 11.13 scrub starts
Jan 31 05:58:44 compute-0 ceph-mon[75251]: 11.13 scrub ok
Jan 31 05:58:44 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 31 05:58:44 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 31 05:58:44 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 05:58:44 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 05:58:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_05:58:44
Jan 31 05:58:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:58:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 05:58:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr', 'images', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta']
Jan 31 05:58:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:58:44 compute-0 sudo[103466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziwyxrdhfooqdhizmwzntbyqotijunbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839124.6412625-195-248804793526056/AnsiballZ_file.py'
Jan 31 05:58:44 compute-0 sudo[103466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:45 compute-0 python3.9[103468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:58:45 compute-0 sudo[103466]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:45 compute-0 ceph-mon[75251]: 7.1a scrub starts
Jan 31 05:58:45 compute-0 ceph-mon[75251]: 7.1a scrub ok
Jan 31 05:58:45 compute-0 ceph-mon[75251]: 8.1 scrub starts
Jan 31 05:58:45 compute-0 ceph-mon[75251]: 8.1 scrub ok
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Jan 31 05:58:45 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 31 05:58:45 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:58:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:58:45 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 31 05:58:45 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 31 05:58:45 compute-0 sudo[103618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjxruneqrsblcctqordtktuwvetwaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839125.301161-203-56508284569033/AnsiballZ_stat.py'
Jan 31 05:58:45 compute-0 sudo[103618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:45 compute-0 python3.9[103620]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:58:45 compute-0 sudo[103618]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:45 compute-0 sudo[103696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fczmutodemarkvytocptgfuynwpxjgyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839125.301161-203-56508284569033/AnsiballZ_file.py'
Jan 31 05:58:45 compute-0 sudo[103696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:46 compute-0 python3.9[103698]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:58:46 compute-0 sudo[103696]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:46 compute-0 ceph-mon[75251]: pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Jan 31 05:58:46 compute-0 ceph-mon[75251]: 10.1b scrub starts
Jan 31 05:58:46 compute-0 ceph-mon[75251]: 10.1b scrub ok
Jan 31 05:58:46 compute-0 ceph-mon[75251]: 6.c scrub starts
Jan 31 05:58:46 compute-0 ceph-mon[75251]: 6.c scrub ok
Jan 31 05:58:46 compute-0 sudo[103848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vupatwpcmgydrzsexcjifjtnehshzrve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839126.5577426-224-148554844304051/AnsiballZ_stat.py'
Jan 31 05:58:46 compute-0 sudo[103848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:46 compute-0 python3.9[103850]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:58:47 compute-0 sudo[103848]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 05:58:47 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 31 05:58:47 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 31 05:58:47 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 31 05:58:47 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 31 05:58:47 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 31 05:58:47 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 31 05:58:47 compute-0 sudo[104002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgosawrkswsnllrngvungdivctkmrhkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839127.5245345-237-146767342900652/AnsiballZ_getent.py'
Jan 31 05:58:47 compute-0 sudo[104002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:48 compute-0 python3.9[104004]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 05:58:48 compute-0 sudo[104002]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:48 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 31 05:58:48 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 31 05:58:48 compute-0 ceph-mon[75251]: pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 05:58:48 compute-0 ceph-mon[75251]: 10.a scrub starts
Jan 31 05:58:48 compute-0 ceph-mon[75251]: 10.a scrub ok
Jan 31 05:58:48 compute-0 ceph-mon[75251]: 11.0 scrub starts
Jan 31 05:58:48 compute-0 ceph-mon[75251]: 11.0 scrub ok
Jan 31 05:58:48 compute-0 ceph-mon[75251]: 11.e scrub starts
Jan 31 05:58:48 compute-0 ceph-mon[75251]: 11.e scrub ok
Jan 31 05:58:48 compute-0 sudo[104155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbtmhslzwwfosmlvmmmgorqtiurorsqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839128.4537802-247-63488097961054/AnsiballZ_getent.py'
Jan 31 05:58:48 compute-0 sudo[104155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:48 compute-0 python3.9[104157]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 05:58:48 compute-0 sudo[104155]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 05:58:49 compute-0 sudo[104308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etncgybobnxulxaytmbmwejbxetyqnpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839129.17607-255-93408960555261/AnsiballZ_group.py'
Jan 31 05:58:49 compute-0 sudo[104308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:49 compute-0 ceph-mon[75251]: 11.f scrub starts
Jan 31 05:58:49 compute-0 ceph-mon[75251]: 11.f scrub ok
Jan 31 05:58:49 compute-0 ceph-mon[75251]: pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 05:58:49 compute-0 python3.9[104310]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 05:58:49 compute-0 sudo[104308]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:50 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 31 05:58:50 compute-0 sudo[104460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxkpmtixjmhjbrwpfvurfkumibuydin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839130.1435902-264-111791774103614/AnsiballZ_file.py'
Jan 31 05:58:50 compute-0 sudo[104460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:50 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 31 05:58:50 compute-0 python3.9[104462]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 05:58:50 compute-0 sudo[104460]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 05:58:51 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 31 05:58:51 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 31 05:58:51 compute-0 sudo[104612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zodxsytzspzqshkjubxehztrcwncsxfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839131.0637949-275-52047756975529/AnsiballZ_dnf.py'
Jan 31 05:58:51 compute-0 sudo[104612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:51 compute-0 ceph-mon[75251]: 6.d scrub starts
Jan 31 05:58:51 compute-0 ceph-mon[75251]: 6.d scrub ok
Jan 31 05:58:51 compute-0 python3.9[104614]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:58:52 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 31 05:58:52 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 31 05:58:52 compute-0 ceph-mon[75251]: pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 05:58:52 compute-0 ceph-mon[75251]: 10.1f scrub starts
Jan 31 05:58:52 compute-0 ceph-mon[75251]: 10.1f scrub ok
Jan 31 05:58:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:53 compute-0 sudo[104612]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 05:58:53 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 31 05:58:53 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 31 05:58:53 compute-0 sudo[104765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjdxpiftaellkoegsbuhyefjiicrqcgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839133.296892-283-111831486472208/AnsiballZ_file.py'
Jan 31 05:58:53 compute-0 sudo[104765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:53 compute-0 ceph-mon[75251]: 10.1d scrub starts
Jan 31 05:58:53 compute-0 ceph-mon[75251]: 10.1d scrub ok
Jan 31 05:58:53 compute-0 python3.9[104767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:58:53 compute-0 sudo[104765]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:54 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 31 05:58:54 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 31 05:58:54 compute-0 sudo[104917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusnghypvnofbvrfiohyawiowewmubih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839134.0377085-291-118628984692069/AnsiballZ_stat.py'
Jan 31 05:58:54 compute-0 sudo[104917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:54 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 31 05:58:54 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 31 05:58:54 compute-0 python3.9[104919]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:58:54 compute-0 sudo[104917]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:54 compute-0 ceph-mon[75251]: pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 05:58:54 compute-0 ceph-mon[75251]: 8.e scrub starts
Jan 31 05:58:54 compute-0 ceph-mon[75251]: 8.e scrub ok
Jan 31 05:58:54 compute-0 sudo[104995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mslzymddkddwrcuzwsvstoniqixzzvwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839134.0377085-291-118628984692069/AnsiballZ_file.py'
Jan 31 05:58:54 compute-0 sudo[104995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:55 compute-0 python3.9[104997]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:58:55 compute-0 sudo[104995]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:55 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 31 05:58:55 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 31 05:58:55 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 31 05:58:55 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 31 05:58:55 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 31 05:58:55 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:58:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:58:55 compute-0 sudo[105147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulshxyuiqlgwogbeclkfopjuvubgzlza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839135.3543696-304-224233928255463/AnsiballZ_stat.py'
Jan 31 05:58:55 compute-0 sudo[105147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:55 compute-0 ceph-mon[75251]: 10.1c scrub starts
Jan 31 05:58:55 compute-0 ceph-mon[75251]: 10.1c scrub ok
Jan 31 05:58:55 compute-0 ceph-mon[75251]: 8.3 scrub starts
Jan 31 05:58:55 compute-0 ceph-mon[75251]: 8.3 scrub ok
Jan 31 05:58:55 compute-0 ceph-mon[75251]: pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:55 compute-0 ceph-mon[75251]: 8.c scrub starts
Jan 31 05:58:55 compute-0 ceph-mon[75251]: 8.c scrub ok
Jan 31 05:58:55 compute-0 python3.9[105149]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:58:55 compute-0 sudo[105147]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:56 compute-0 sudo[105225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjipwfdqysxmyqtanweespisabjawpda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839135.3543696-304-224233928255463/AnsiballZ_file.py'
Jan 31 05:58:56 compute-0 sudo[105225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:56 compute-0 python3.9[105227]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:58:56 compute-0 sudo[105225]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:56 compute-0 sudo[105377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihbzowgxdyohichnfurgrdxevxeyzqvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839136.6026013-319-222935955640853/AnsiballZ_dnf.py'
Jan 31 05:58:56 compute-0 sudo[105377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:58:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:57 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 05:58:57 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 05:58:57 compute-0 python3.9[105379]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:58:57 compute-0 ceph-mon[75251]: 10.18 scrub starts
Jan 31 05:58:57 compute-0 ceph-mon[75251]: 10.18 scrub ok
Jan 31 05:58:57 compute-0 ceph-mon[75251]: 6.2 scrub starts
Jan 31 05:58:57 compute-0 ceph-mon[75251]: 6.2 scrub ok
Jan 31 05:58:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:58:58 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 31 05:58:58 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 31 05:58:58 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 31 05:58:58 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 31 05:58:58 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 31 05:58:58 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 31 05:58:58 compute-0 sudo[105377]: pam_unix(sudo:session): session closed for user root
Jan 31 05:58:58 compute-0 ceph-mon[75251]: pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:58 compute-0 ceph-mon[75251]: 10.5 scrub starts
Jan 31 05:58:58 compute-0 ceph-mon[75251]: 10.5 scrub ok
Jan 31 05:58:59 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 31 05:58:59 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 31 05:58:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:58:59 compute-0 python3.9[105530]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:58:59 compute-0 ceph-mon[75251]: 6.8 scrub starts
Jan 31 05:58:59 compute-0 ceph-mon[75251]: 6.8 scrub ok
Jan 31 05:58:59 compute-0 ceph-mon[75251]: 6.6 scrub starts
Jan 31 05:58:59 compute-0 ceph-mon[75251]: 6.6 scrub ok
Jan 31 05:58:59 compute-0 ceph-mon[75251]: 11.1 scrub starts
Jan 31 05:58:59 compute-0 ceph-mon[75251]: 11.1 scrub ok
Jan 31 05:59:00 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 31 05:59:00 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 31 05:59:00 compute-0 python3.9[105682]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 05:59:00 compute-0 python3.9[105832]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:59:00 compute-0 ceph-mon[75251]: 10.c scrub starts
Jan 31 05:59:00 compute-0 ceph-mon[75251]: 10.c scrub ok
Jan 31 05:59:00 compute-0 ceph-mon[75251]: pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:01 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 31 05:59:01 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 31 05:59:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:01 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 31 05:59:01 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 31 05:59:01 compute-0 sudo[105982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjuexffqzvxiyqouixbqqxdikpolhvsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839141.2906291-360-250373053211645/AnsiballZ_systemd.py'
Jan 31 05:59:01 compute-0 sudo[105982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:02 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 05:59:02 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 05:59:02 compute-0 python3.9[105984]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:59:02 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 05:59:02 compute-0 ceph-mon[75251]: 10.0 scrub starts
Jan 31 05:59:02 compute-0 ceph-mon[75251]: 10.0 scrub ok
Jan 31 05:59:02 compute-0 ceph-mon[75251]: pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:02 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 05:59:02 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 05:59:02 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 05:59:02 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 05:59:02 compute-0 sudo[105982]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:03 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 31 05:59:03 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 31 05:59:03 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 31 05:59:03 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 31 05:59:03 compute-0 python3.9[106145]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 05:59:03 compute-0 ceph-mon[75251]: 10.3 scrub starts
Jan 31 05:59:03 compute-0 ceph-mon[75251]: 10.3 scrub ok
Jan 31 05:59:03 compute-0 ceph-mon[75251]: 8.8 scrub starts
Jan 31 05:59:03 compute-0 ceph-mon[75251]: 8.8 scrub ok
Jan 31 05:59:03 compute-0 ceph-mon[75251]: 11.c scrub starts
Jan 31 05:59:03 compute-0 ceph-mon[75251]: 11.c scrub ok
Jan 31 05:59:04 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 31 05:59:04 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 31 05:59:04 compute-0 ceph-mon[75251]: pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:04 compute-0 ceph-mon[75251]: 11.1a scrub starts
Jan 31 05:59:04 compute-0 ceph-mon[75251]: 11.1a scrub ok
Jan 31 05:59:04 compute-0 ceph-mon[75251]: 6.1 scrub starts
Jan 31 05:59:04 compute-0 ceph-mon[75251]: 6.1 scrub ok
Jan 31 05:59:04 compute-0 ceph-mon[75251]: 8.18 scrub starts
Jan 31 05:59:04 compute-0 ceph-mon[75251]: 8.18 scrub ok
Jan 31 05:59:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:05 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 05:59:05 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 05:59:05 compute-0 ceph-mon[75251]: pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:05 compute-0 ceph-mon[75251]: 8.1a scrub starts
Jan 31 05:59:05 compute-0 ceph-mon[75251]: 8.1a scrub ok
Jan 31 05:59:05 compute-0 sudo[106295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrrxolqjurvhqzrydaljlfedmzhlmyid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839145.6652796-417-243587346503054/AnsiballZ_systemd.py'
Jan 31 05:59:05 compute-0 sudo[106295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:06 compute-0 python3.9[106297]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:59:06 compute-0 sudo[106295]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:06 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 31 05:59:06 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 31 05:59:06 compute-0 sudo[106449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjobpfbxkumqtgungeqkonafyhkgpaco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839146.4293578-417-47879347110754/AnsiballZ_systemd.py'
Jan 31 05:59:06 compute-0 sudo[106449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:07 compute-0 python3.9[106451]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 05:59:07 compute-0 systemd[76640]: Created slice User Background Tasks Slice.
Jan 31 05:59:07 compute-0 systemd[76640]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 05:59:07 compute-0 systemd[76640]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 05:59:07 compute-0 sudo[106449]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:07 compute-0 ceph-mon[75251]: 11.a scrub starts
Jan 31 05:59:07 compute-0 ceph-mon[75251]: 11.a scrub ok
Jan 31 05:59:07 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 31 05:59:07 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 31 05:59:07 compute-0 sshd-session[99744]: Connection closed by 192.168.122.30 port 46326
Jan 31 05:59:07 compute-0 sshd-session[99741]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:59:07 compute-0 systemd-logind[797]: Session 34 logged out. Waiting for processes to exit.
Jan 31 05:59:07 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 05:59:07 compute-0 systemd[1]: session-34.scope: Consumed 1min 756ms CPU time.
Jan 31 05:59:07 compute-0 systemd-logind[797]: Removed session 34.
Jan 31 05:59:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:08 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 31 05:59:08 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 31 05:59:08 compute-0 ceph-mon[75251]: pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:08 compute-0 ceph-mon[75251]: 10.1e scrub starts
Jan 31 05:59:08 compute-0 ceph-mon[75251]: 10.1e scrub ok
Jan 31 05:59:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:09 compute-0 ceph-mon[75251]: 8.1c scrub starts
Jan 31 05:59:09 compute-0 ceph-mon[75251]: 8.1c scrub ok
Jan 31 05:59:10 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 31 05:59:10 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 31 05:59:10 compute-0 ceph-mon[75251]: pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:11 compute-0 ceph-mon[75251]: 8.1b scrub starts
Jan 31 05:59:11 compute-0 ceph-mon[75251]: 8.1b scrub ok
Jan 31 05:59:11 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 31 05:59:11 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 31 05:59:12 compute-0 ceph-mon[75251]: pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:12 compute-0 ceph-mon[75251]: 8.0 scrub starts
Jan 31 05:59:12 compute-0 ceph-mon[75251]: 8.0 scrub ok
Jan 31 05:59:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:13 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 05:59:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:13 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 05:59:13 compute-0 sshd-session[106479]: Accepted publickey for zuul from 192.168.122.30 port 49542 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:59:13 compute-0 systemd-logind[797]: New session 35 of user zuul.
Jan 31 05:59:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 31 05:59:13 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 31 05:59:13 compute-0 sshd-session[106479]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:59:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 31 05:59:13 compute-0 ceph-mon[75251]: pgmap v288: 305 pgs: 305 active+clean; 460 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:13 compute-0 ceph-mon[75251]: 11.19 scrub starts
Jan 31 05:59:14 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 31 05:59:14 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 31 05:59:14 compute-0 python3.9[106632]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:59:14 compute-0 ceph-mon[75251]: 11.b scrub starts
Jan 31 05:59:14 compute-0 ceph-mon[75251]: 11.b scrub ok
Jan 31 05:59:14 compute-0 ceph-mon[75251]: 11.19 scrub ok
Jan 31 05:59:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:59:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:59:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:59:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:59:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:59:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:59:15 compute-0 sudo[106786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uztxngesmdhkcuxigcyrkdqnntllagjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839155.0469217-31-125231095506977/AnsiballZ_getent.py'
Jan 31 05:59:15 compute-0 sudo[106786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:15 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 31 05:59:15 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 31 05:59:15 compute-0 python3.9[106788]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 05:59:15 compute-0 sudo[106786]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:15 compute-0 ceph-mon[75251]: 6.e scrub starts
Jan 31 05:59:15 compute-0 ceph-mon[75251]: 6.e scrub ok
Jan 31 05:59:15 compute-0 ceph-mon[75251]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:16 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 31 05:59:16 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 31 05:59:16 compute-0 sudo[106939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvulmramjodbpmbffaeuzdqgkudchuol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839156.0331488-43-260568903881298/AnsiballZ_setup.py'
Jan 31 05:59:16 compute-0 sudo[106939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:16 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 31 05:59:16 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 31 05:59:16 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 31 05:59:16 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 31 05:59:16 compute-0 python3.9[106941]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:59:16 compute-0 sudo[106939]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:16 compute-0 ceph-mon[75251]: 8.7 scrub starts
Jan 31 05:59:16 compute-0 ceph-mon[75251]: 8.7 scrub ok
Jan 31 05:59:16 compute-0 ceph-mon[75251]: 11.17 scrub starts
Jan 31 05:59:16 compute-0 ceph-mon[75251]: 11.17 scrub ok
Jan 31 05:59:17 compute-0 sudo[107023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayfpkwpvnwiprlsewgrjamjwmpbhirpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839156.0331488-43-260568903881298/AnsiballZ_dnf.py'
Jan 31 05:59:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:17 compute-0 sudo[107023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:17 compute-0 python3.9[107025]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 05:59:17 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 31 05:59:17 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 31 05:59:17 compute-0 ceph-mon[75251]: 11.1f scrub starts
Jan 31 05:59:17 compute-0 ceph-mon[75251]: 11.1f scrub ok
Jan 31 05:59:17 compute-0 ceph-mon[75251]: 11.5 scrub starts
Jan 31 05:59:17 compute-0 ceph-mon[75251]: 11.5 scrub ok
Jan 31 05:59:17 compute-0 ceph-mon[75251]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:18 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 31 05:59:18 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 31 05:59:18 compute-0 sudo[107023]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:19 compute-0 sudo[107074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:59:19 compute-0 sudo[107074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:19 compute-0 sudo[107074]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:19 compute-0 ceph-mon[75251]: 8.14 scrub starts
Jan 31 05:59:19 compute-0 ceph-mon[75251]: 8.14 scrub ok
Jan 31 05:59:19 compute-0 ceph-mon[75251]: 10.16 scrub starts
Jan 31 05:59:19 compute-0 ceph-mon[75251]: 10.16 scrub ok
Jan 31 05:59:19 compute-0 sudo[107128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 05:59:19 compute-0 sudo[107128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:19 compute-0 sudo[107226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydqvnqqmkxifuddqzffdpeyviiocxkcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839159.0447323-57-39525887363072/AnsiballZ_dnf.py'
Jan 31 05:59:19 compute-0 sudo[107226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:19 compute-0 python3.9[107228]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:59:19 compute-0 sudo[107128]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:59:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:59:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 05:59:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:59:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 05:59:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:59:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 05:59:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:59:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 05:59:19 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:59:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 05:59:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:59:19 compute-0 sudo[107260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:59:19 compute-0 sudo[107260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:19 compute-0 sudo[107260]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:19 compute-0 sudo[107285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 05:59:19 compute-0 sudo[107285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:20 compute-0 ceph-mon[75251]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:59:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 05:59:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:59:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 05:59:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 05:59:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 05:59:20 compute-0 podman[107321]: 2026-01-31 05:59:20.247412066 +0000 UTC m=+0.081425922 container create 864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:59:20 compute-0 podman[107321]: 2026-01-31 05:59:20.19970665 +0000 UTC m=+0.033720566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:59:20 compute-0 systemd[1]: Started libpod-conmon-864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab.scope.
Jan 31 05:59:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:59:20 compute-0 podman[107321]: 2026-01-31 05:59:20.389845208 +0000 UTC m=+0.223859034 container init 864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_euclid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:59:20 compute-0 podman[107321]: 2026-01-31 05:59:20.398219173 +0000 UTC m=+0.232232999 container start 864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_euclid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 05:59:20 compute-0 distracted_euclid[107338]: 167 167
Jan 31 05:59:20 compute-0 systemd[1]: libpod-864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab.scope: Deactivated successfully.
Jan 31 05:59:20 compute-0 podman[107321]: 2026-01-31 05:59:20.438209523 +0000 UTC m=+0.272223369 container attach 864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_euclid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:59:20 compute-0 podman[107321]: 2026-01-31 05:59:20.439024866 +0000 UTC m=+0.273038742 container died 864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_euclid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 05:59:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ae97eec526913572a2aea11742480f1ac83871dc21c064ceff33113c371c0d-merged.mount: Deactivated successfully.
Jan 31 05:59:20 compute-0 sudo[107226]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:20 compute-0 podman[107321]: 2026-01-31 05:59:20.809805238 +0000 UTC m=+0.643819054 container remove 864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 05:59:20 compute-0 systemd[1]: libpod-conmon-864e861bc589eb303d78f8c0b3f1c096b46105686bfc8aef66db83790bcdd1ab.scope: Deactivated successfully.
Jan 31 05:59:21 compute-0 podman[107385]: 2026-01-31 05:59:20.966065817 +0000 UTC m=+0.026118813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:59:21 compute-0 podman[107385]: 2026-01-31 05:59:21.098931661 +0000 UTC m=+0.158984647 container create 3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:59:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:21 compute-0 systemd[1]: Started libpod-conmon-3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523.scope.
Jan 31 05:59:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ba6d7f9e4a85ea9699d20ea5c9ecac1758dfd4d29782dc850a12d32ef465dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ba6d7f9e4a85ea9699d20ea5c9ecac1758dfd4d29782dc850a12d32ef465dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ba6d7f9e4a85ea9699d20ea5c9ecac1758dfd4d29782dc850a12d32ef465dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ba6d7f9e4a85ea9699d20ea5c9ecac1758dfd4d29782dc850a12d32ef465dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64ba6d7f9e4a85ea9699d20ea5c9ecac1758dfd4d29782dc850a12d32ef465dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:21 compute-0 podman[107385]: 2026-01-31 05:59:21.332896348 +0000 UTC m=+0.392949404 container init 3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 05:59:21 compute-0 podman[107385]: 2026-01-31 05:59:21.342975721 +0000 UTC m=+0.403028737 container start 3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:59:21 compute-0 podman[107385]: 2026-01-31 05:59:21.357770916 +0000 UTC m=+0.417823932 container attach 3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 05:59:21 compute-0 sudo[107533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scfieeceetlmwgdgmbmtqzbxjvgqlgjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839160.9804618-65-235903051044116/AnsiballZ_systemd.py'
Jan 31 05:59:21 compute-0 sudo[107533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:21 compute-0 gallant_jackson[107453]: --> passed data devices: 0 physical, 3 LVM
Jan 31 05:59:21 compute-0 gallant_jackson[107453]: --> All data devices are unavailable
Jan 31 05:59:21 compute-0 systemd[1]: libpod-3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523.scope: Deactivated successfully.
Jan 31 05:59:21 compute-0 podman[107385]: 2026-01-31 05:59:21.827769088 +0000 UTC m=+0.887822064 container died 3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:59:21 compute-0 python3.9[107535]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 05:59:21 compute-0 sudo[107533]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-64ba6d7f9e4a85ea9699d20ea5c9ecac1758dfd4d29782dc850a12d32ef465dc-merged.mount: Deactivated successfully.
Jan 31 05:59:22 compute-0 ceph-mon[75251]: pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:22 compute-0 python3.9[107714]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:59:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:23 compute-0 podman[107385]: 2026-01-31 05:59:23.135822728 +0000 UTC m=+2.195875744 container remove 3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 05:59:23 compute-0 systemd[1]: libpod-conmon-3be1ab28ae10b91e705263c91ff7d1d97a7061bf2165f01b00857b40f9ac9523.scope: Deactivated successfully.
Jan 31 05:59:23 compute-0 sudo[107285]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:23 compute-0 sudo[107777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:59:23 compute-0 sudo[107777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:23 compute-0 sudo[107777]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:23 compute-0 sudo[107816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 05:59:23 compute-0 sudo[107816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:23 compute-0 sudo[107941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpyisodxqkupdigeeknpcojxwsgkudti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839163.167901-83-63739178395196/AnsiballZ_sefcontext.py'
Jan 31 05:59:23 compute-0 sudo[107941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:23 compute-0 podman[107901]: 2026-01-31 05:59:23.546849988 +0000 UTC m=+0.048674915 container create 0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 05:59:23 compute-0 systemd[1]: Started libpod-conmon-0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970.scope.
Jan 31 05:59:23 compute-0 podman[107901]: 2026-01-31 05:59:23.517515406 +0000 UTC m=+0.019340383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:59:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:59:23 compute-0 podman[107901]: 2026-01-31 05:59:23.637529489 +0000 UTC m=+0.139354466 container init 0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:59:23 compute-0 podman[107901]: 2026-01-31 05:59:23.643181128 +0000 UTC m=+0.145006015 container start 0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 05:59:23 compute-0 podman[107901]: 2026-01-31 05:59:23.64717628 +0000 UTC m=+0.149001187 container attach 0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 05:59:23 compute-0 eloquent_zhukovsky[107946]: 167 167
Jan 31 05:59:23 compute-0 systemd[1]: libpod-0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970.scope: Deactivated successfully.
Jan 31 05:59:23 compute-0 podman[107901]: 2026-01-31 05:59:23.649263438 +0000 UTC m=+0.151088365 container died 0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 05:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe4b9043764d7e6f500fef96a02cdb76203cf1542b5c9583a9100e000e80451e-merged.mount: Deactivated successfully.
Jan 31 05:59:23 compute-0 podman[107901]: 2026-01-31 05:59:23.742919213 +0000 UTC m=+0.244744100 container remove 0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:59:23 compute-0 systemd[1]: libpod-conmon-0dcfe421ea8d730d04b29cea3e99a63b73c83c42bddcd63f27be6188e342d970.scope: Deactivated successfully.
Jan 31 05:59:23 compute-0 ceph-mon[75251]: pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:23 compute-0 python3.9[107943]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 05:59:23 compute-0 podman[107970]: 2026-01-31 05:59:23.908205534 +0000 UTC m=+0.061227997 container create 65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 05:59:23 compute-0 systemd[1]: Started libpod-conmon-65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967.scope.
Jan 31 05:59:23 compute-0 podman[107970]: 2026-01-31 05:59:23.86771204 +0000 UTC m=+0.020734573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:59:23 compute-0 sudo[107941]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f62c855f7f552a6919c1c829d67b2c0d26c159c23faa5bf08ae4c6a42f6b9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f62c855f7f552a6919c1c829d67b2c0d26c159c23faa5bf08ae4c6a42f6b9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f62c855f7f552a6919c1c829d67b2c0d26c159c23faa5bf08ae4c6a42f6b9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9f62c855f7f552a6919c1c829d67b2c0d26c159c23faa5bf08ae4c6a42f6b9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:24 compute-0 podman[107970]: 2026-01-31 05:59:24.02472546 +0000 UTC m=+0.177747923 container init 65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 05:59:24 compute-0 podman[107970]: 2026-01-31 05:59:24.030877452 +0000 UTC m=+0.183899905 container start 65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_northcutt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 05:59:24 compute-0 podman[107970]: 2026-01-31 05:59:24.061679286 +0000 UTC m=+0.214701739 container attach 65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_northcutt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]: {
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:     "0": [
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:         {
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "devices": [
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "/dev/loop3"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             ],
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_name": "ceph_lv0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_size": "21470642176",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "name": "ceph_lv0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "tags": {
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cluster_name": "ceph",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.crush_device_class": "",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.encrypted": "0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.objectstore": "bluestore",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osd_id": "0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.type": "block",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.vdo": "0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.with_tpm": "0"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             },
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "type": "block",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "vg_name": "ceph_vg0"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:         }
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:     ],
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:     "1": [
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:         {
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "devices": [
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "/dev/loop4"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             ],
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_name": "ceph_lv1",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_size": "21470642176",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "name": "ceph_lv1",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "tags": {
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cluster_name": "ceph",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.crush_device_class": "",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.encrypted": "0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.objectstore": "bluestore",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osd_id": "1",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.type": "block",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.vdo": "0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.with_tpm": "0"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             },
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "type": "block",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "vg_name": "ceph_vg1"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:         }
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:     ],
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:     "2": [
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:         {
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "devices": [
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "/dev/loop5"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             ],
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_name": "ceph_lv2",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_size": "21470642176",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "name": "ceph_lv2",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "tags": {
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.cluster_name": "ceph",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.crush_device_class": "",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.encrypted": "0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.objectstore": "bluestore",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osd_id": "2",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.type": "block",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.vdo": "0",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:                 "ceph.with_tpm": "0"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             },
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "type": "block",
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:             "vg_name": "ceph_vg2"
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:         }
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]:     ]
Jan 31 05:59:24 compute-0 suspicious_northcutt[107987]: }
Jan 31 05:59:24 compute-0 systemd[1]: libpod-65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967.scope: Deactivated successfully.
Jan 31 05:59:24 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 31 05:59:24 compute-0 conmon[107987]: conmon 65aead5aecbe59b4dd30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967.scope/container/memory.events
Jan 31 05:59:24 compute-0 podman[107970]: 2026-01-31 05:59:24.321034075 +0000 UTC m=+0.474056528 container died 65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_northcutt, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 05:59:24 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 31 05:59:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9f62c855f7f552a6919c1c829d67b2c0d26c159c23faa5bf08ae4c6a42f6b9e-merged.mount: Deactivated successfully.
Jan 31 05:59:24 compute-0 podman[107970]: 2026-01-31 05:59:24.403744443 +0000 UTC m=+0.556766896 container remove 65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 31 05:59:24 compute-0 systemd[1]: libpod-conmon-65aead5aecbe59b4dd30947039a86149f0cc287362a0109eb0a9ff81fc8c4967.scope: Deactivated successfully.
Jan 31 05:59:24 compute-0 sudo[107816]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:24 compute-0 sudo[108130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 05:59:24 compute-0 sudo[108130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:24 compute-0 sudo[108130]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:24 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 31 05:59:24 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 31 05:59:24 compute-0 sudo[108170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 05:59:24 compute-0 sudo[108170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:24 compute-0 python3.9[108191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:59:24 compute-0 podman[108220]: 2026-01-31 05:59:24.850032 +0000 UTC m=+0.110727074 container create 660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_tesla, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 05:59:24 compute-0 ceph-mon[75251]: 10.1 scrub starts
Jan 31 05:59:24 compute-0 ceph-mon[75251]: 10.1 scrub ok
Jan 31 05:59:24 compute-0 podman[108220]: 2026-01-31 05:59:24.76082631 +0000 UTC m=+0.021521344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:59:24 compute-0 systemd[1]: Started libpod-conmon-660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064.scope.
Jan 31 05:59:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:59:24 compute-0 podman[108220]: 2026-01-31 05:59:24.924768335 +0000 UTC m=+0.185463369 container init 660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_tesla, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:59:24 compute-0 podman[108220]: 2026-01-31 05:59:24.931425782 +0000 UTC m=+0.192120856 container start 660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 05:59:24 compute-0 ecstatic_tesla[108243]: 167 167
Jan 31 05:59:24 compute-0 systemd[1]: libpod-660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064.scope: Deactivated successfully.
Jan 31 05:59:24 compute-0 conmon[108243]: conmon 660ecbda78c0846a1905 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064.scope/container/memory.events
Jan 31 05:59:24 compute-0 podman[108220]: 2026-01-31 05:59:24.954399806 +0000 UTC m=+0.215094860 container attach 660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_tesla, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 05:59:24 compute-0 podman[108220]: 2026-01-31 05:59:24.956205126 +0000 UTC m=+0.216900180 container died 660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_tesla, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:59:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c1125bba0b98e022aadb56879c6b84e9fad69e33f637b4f763b3c6ff7ddc853-merged.mount: Deactivated successfully.
Jan 31 05:59:25 compute-0 podman[108220]: 2026-01-31 05:59:25.063178394 +0000 UTC m=+0.323873448 container remove 660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 05:59:25 compute-0 systemd[1]: libpod-conmon-660ecbda78c0846a1905e1ae74a1280e91943a45300b831c9f2e7f934315a064.scope: Deactivated successfully.
Jan 31 05:59:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:25 compute-0 podman[108292]: 2026-01-31 05:59:25.240383501 +0000 UTC m=+0.063919983 container create 03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_babbage, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 05:59:25 compute-0 podman[108292]: 2026-01-31 05:59:25.194403592 +0000 UTC m=+0.017940104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 05:59:25 compute-0 systemd[1]: Started libpod-conmon-03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b.scope.
Jan 31 05:59:25 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 05:59:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 05:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf694b87d6207c1d5162e3cb8c5a410016e5ef075f5ea71deb1211ff317b81fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf694b87d6207c1d5162e3cb8c5a410016e5ef075f5ea71deb1211ff317b81fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf694b87d6207c1d5162e3cb8c5a410016e5ef075f5ea71deb1211ff317b81fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf694b87d6207c1d5162e3cb8c5a410016e5ef075f5ea71deb1211ff317b81fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 05:59:25 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 05:59:25 compute-0 podman[108292]: 2026-01-31 05:59:25.33883397 +0000 UTC m=+0.162370482 container init 03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_babbage, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 05:59:25 compute-0 podman[108292]: 2026-01-31 05:59:25.343431319 +0000 UTC m=+0.166967831 container start 03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 05:59:25 compute-0 podman[108292]: 2026-01-31 05:59:25.346878085 +0000 UTC m=+0.170414577 container attach 03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_babbage, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 05:59:25 compute-0 sudo[108439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylccmxcsztgzuewjgmpwqxqivipitzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839165.2663426-101-79977645139451/AnsiballZ_dnf.py'
Jan 31 05:59:25 compute-0 sudo[108439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:25 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 05:59:25 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 05:59:25 compute-0 python3.9[108442]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:59:25 compute-0 ceph-mon[75251]: 6.b scrub starts
Jan 31 05:59:25 compute-0 ceph-mon[75251]: 6.b scrub ok
Jan 31 05:59:25 compute-0 ceph-mon[75251]: pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:25 compute-0 ceph-mon[75251]: 8.1d scrub starts
Jan 31 05:59:25 compute-0 ceph-mon[75251]: 8.1d scrub ok
Jan 31 05:59:25 compute-0 lvm[108518]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 05:59:25 compute-0 lvm[108517]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 05:59:25 compute-0 lvm[108517]: VG ceph_vg0 finished
Jan 31 05:59:25 compute-0 lvm[108518]: VG ceph_vg1 finished
Jan 31 05:59:25 compute-0 lvm[108520]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 05:59:25 compute-0 lvm[108520]: VG ceph_vg2 finished
Jan 31 05:59:26 compute-0 gifted_babbage[108353]: {}
Jan 31 05:59:26 compute-0 systemd[1]: libpod-03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b.scope: Deactivated successfully.
Jan 31 05:59:26 compute-0 podman[108292]: 2026-01-31 05:59:26.046308078 +0000 UTC m=+0.869844620 container died 03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_babbage, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 05:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf694b87d6207c1d5162e3cb8c5a410016e5ef075f5ea71deb1211ff317b81fc-merged.mount: Deactivated successfully.
Jan 31 05:59:26 compute-0 podman[108292]: 2026-01-31 05:59:26.1113254 +0000 UTC m=+0.934861932 container remove 03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 05:59:26 compute-0 systemd[1]: libpod-conmon-03892701ccb5aaea14fdf43f9c58336741f429e072b0188b2b2d064a2d0c6d1b.scope: Deactivated successfully.
Jan 31 05:59:26 compute-0 sudo[108170]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 05:59:26 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:59:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 05:59:26 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:59:26 compute-0 sudo[108535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 05:59:26 compute-0 sudo[108535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 05:59:26 compute-0 sudo[108535]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:26 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 31 05:59:26 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 31 05:59:26 compute-0 ceph-mon[75251]: 11.12 scrub starts
Jan 31 05:59:26 compute-0 ceph-mon[75251]: 11.12 scrub ok
Jan 31 05:59:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:59:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 05:59:26 compute-0 ceph-mon[75251]: 8.1f scrub starts
Jan 31 05:59:26 compute-0 ceph-mon[75251]: 8.1f scrub ok
Jan 31 05:59:26 compute-0 sudo[108439]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 31 05:59:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 31 05:59:27 compute-0 sudo[108709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noqbcgaprbhmdggpjugzniruqyvggrfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839167.1223245-109-47942680050111/AnsiballZ_command.py'
Jan 31 05:59:27 compute-0 sudo[108709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:27 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 31 05:59:27 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 31 05:59:27 compute-0 python3.9[108711]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:59:27 compute-0 ceph-mon[75251]: pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:27 compute-0 ceph-mon[75251]: 11.4 scrub starts
Jan 31 05:59:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:28 compute-0 sudo[108709]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:28 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 05:59:28 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 05:59:28 compute-0 sudo[108996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwvrharbvwmcjnwohsopaklfpgjbhqdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839168.4560423-117-139525029159197/AnsiballZ_file.py'
Jan 31 05:59:28 compute-0 sudo[108996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:28 compute-0 ceph-mon[75251]: 8.5 scrub starts
Jan 31 05:59:28 compute-0 ceph-mon[75251]: 8.5 scrub ok
Jan 31 05:59:28 compute-0 ceph-mon[75251]: 11.4 scrub ok
Jan 31 05:59:29 compute-0 python3.9[108998]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 05:59:29 compute-0 sudo[108996]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:29 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 31 05:59:29 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 31 05:59:29 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 05:59:29 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 05:59:29 compute-0 python3.9[109148]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:59:29 compute-0 ceph-mon[75251]: 11.18 scrub starts
Jan 31 05:59:29 compute-0 ceph-mon[75251]: 11.18 scrub ok
Jan 31 05:59:29 compute-0 ceph-mon[75251]: pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:30 compute-0 sudo[109300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbvghlakxrenxswfzrtaqamnggezesfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839169.9836657-133-131644019042398/AnsiballZ_dnf.py'
Jan 31 05:59:30 compute-0 sudo[109300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:30 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 05:59:30 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 05:59:30 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 31 05:59:30 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 31 05:59:30 compute-0 python3.9[109302]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:59:31 compute-0 ceph-mon[75251]: 11.7 scrub starts
Jan 31 05:59:31 compute-0 ceph-mon[75251]: 11.7 scrub ok
Jan 31 05:59:31 compute-0 ceph-mon[75251]: 10.9 scrub starts
Jan 31 05:59:31 compute-0 ceph-mon[75251]: 10.9 scrub ok
Jan 31 05:59:31 compute-0 ceph-mon[75251]: 8.12 scrub starts
Jan 31 05:59:31 compute-0 ceph-mon[75251]: 8.12 scrub ok
Jan 31 05:59:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:31 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 31 05:59:31 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 31 05:59:31 compute-0 sudo[109300]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:32 compute-0 ceph-mon[75251]: 8.19 scrub starts
Jan 31 05:59:32 compute-0 ceph-mon[75251]: 8.19 scrub ok
Jan 31 05:59:32 compute-0 ceph-mon[75251]: pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:32 compute-0 ceph-mon[75251]: 11.1e scrub starts
Jan 31 05:59:32 compute-0 ceph-mon[75251]: 11.1e scrub ok
Jan 31 05:59:32 compute-0 sudo[109453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aliiocatwlkknzyukrxutzqovboejiub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839171.812842-142-214159175787180/AnsiballZ_dnf.py'
Jan 31 05:59:32 compute-0 sudo[109453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:32 compute-0 python3.9[109455]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:59:32 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 31 05:59:32 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 31 05:59:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:33 compute-0 ceph-mon[75251]: 11.11 scrub starts
Jan 31 05:59:33 compute-0 ceph-mon[75251]: 11.11 scrub ok
Jan 31 05:59:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:33 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 05:59:33 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 05:59:33 compute-0 sudo[109453]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 05:59:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 05:59:34 compute-0 sudo[109606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dldjytfqatvernqidgydhjprkgfiekxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839173.8102832-154-197178766784523/AnsiballZ_stat.py'
Jan 31 05:59:34 compute-0 sudo[109606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:34 compute-0 ceph-mon[75251]: pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:34 compute-0 ceph-mon[75251]: 10.15 scrub starts
Jan 31 05:59:34 compute-0 ceph-mon[75251]: 10.15 scrub ok
Jan 31 05:59:34 compute-0 python3.9[109608]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 05:59:34 compute-0 sudo[109606]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:34 compute-0 sudo[109760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kicuotekwwsnefetwiqigyanpqxqszgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839174.4439006-162-92833073637814/AnsiballZ_slurp.py'
Jan 31 05:59:34 compute-0 sudo[109760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:35 compute-0 python3.9[109762]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 05:59:35 compute-0 sudo[109760]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:35 compute-0 ceph-mon[75251]: 11.1d scrub starts
Jan 31 05:59:35 compute-0 ceph-mon[75251]: 11.1d scrub ok
Jan 31 05:59:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:35 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 05:59:35 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 05:59:35 compute-0 sshd-session[106482]: Connection closed by 192.168.122.30 port 49542
Jan 31 05:59:35 compute-0 sshd-session[106479]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:59:35 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 05:59:35 compute-0 systemd[1]: session-35.scope: Consumed 15.715s CPU time.
Jan 31 05:59:35 compute-0 systemd-logind[797]: Session 35 logged out. Waiting for processes to exit.
Jan 31 05:59:35 compute-0 systemd-logind[797]: Removed session 35.
Jan 31 05:59:36 compute-0 ceph-mon[75251]: pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:36 compute-0 ceph-mon[75251]: 11.1b scrub starts
Jan 31 05:59:36 compute-0 ceph-mon[75251]: 11.1b scrub ok
Jan 31 05:59:36 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 31 05:59:36 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 31 05:59:37 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 05:59:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:37 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 05:59:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:38 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 05:59:38 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 05:59:38 compute-0 ceph-mon[75251]: 8.6 scrub starts
Jan 31 05:59:38 compute-0 ceph-mon[75251]: 8.6 scrub ok
Jan 31 05:59:38 compute-0 ceph-mon[75251]: 8.1e scrub starts
Jan 31 05:59:38 compute-0 ceph-mon[75251]: pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:38 compute-0 ceph-mon[75251]: 8.1e scrub ok
Jan 31 05:59:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:39 compute-0 ceph-mon[75251]: 6.4 scrub starts
Jan 31 05:59:39 compute-0 ceph-mon[75251]: 6.4 scrub ok
Jan 31 05:59:40 compute-0 ceph-mon[75251]: pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:41 compute-0 sshd-session[109787]: Accepted publickey for zuul from 192.168.122.30 port 51862 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:59:41 compute-0 systemd-logind[797]: New session 36 of user zuul.
Jan 31 05:59:41 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 31 05:59:41 compute-0 sshd-session[109787]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:59:42 compute-0 python3.9[109940]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:59:42 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 31 05:59:42 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 31 05:59:42 compute-0 ceph-mon[75251]: pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:43 compute-0 python3.9[110094]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:59:43 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 05:59:43 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 05:59:43 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 05:59:43 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 05:59:43 compute-0 ceph-mon[75251]: 8.a scrub starts
Jan 31 05:59:43 compute-0 ceph-mon[75251]: 8.a scrub ok
Jan 31 05:59:44 compute-0 python3.9[110287]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:59:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_05:59:44
Jan 31 05:59:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 05:59:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 05:59:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Jan 31 05:59:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 05:59:44 compute-0 ceph-mon[75251]: pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:44 compute-0 ceph-mon[75251]: 8.13 scrub starts
Jan 31 05:59:44 compute-0 ceph-mon[75251]: 8.13 scrub ok
Jan 31 05:59:44 compute-0 ceph-mon[75251]: 11.1c scrub starts
Jan 31 05:59:44 compute-0 ceph-mon[75251]: 11.1c scrub ok
Jan 31 05:59:44 compute-0 sshd-session[109790]: Connection closed by 192.168.122.30 port 51862
Jan 31 05:59:44 compute-0 sshd-session[109787]: pam_unix(sshd:session): session closed for user zuul
Jan 31 05:59:44 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 05:59:44 compute-0 systemd[1]: session-36.scope: Consumed 1.952s CPU time.
Jan 31 05:59:44 compute-0 systemd-logind[797]: Session 36 logged out. Waiting for processes to exit.
Jan 31 05:59:44 compute-0 systemd-logind[797]: Removed session 36.
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:59:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 05:59:45 compute-0 ceph-mon[75251]: pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:46 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 31 05:59:46 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 31 05:59:46 compute-0 ceph-mon[75251]: 8.d scrub starts
Jan 31 05:59:46 compute-0 ceph-mon[75251]: 8.d scrub ok
Jan 31 05:59:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:47 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 05:59:47 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 05:59:47 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 05:59:47 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 05:59:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:48 compute-0 ceph-mon[75251]: pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:48 compute-0 ceph-mon[75251]: 11.2 scrub starts
Jan 31 05:59:48 compute-0 ceph-mon[75251]: 11.2 scrub ok
Jan 31 05:59:48 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 31 05:59:48 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 31 05:59:49 compute-0 ceph-mon[75251]: 10.1a scrub starts
Jan 31 05:59:49 compute-0 ceph-mon[75251]: 10.1a scrub ok
Jan 31 05:59:49 compute-0 ceph-mon[75251]: 8.2 scrub starts
Jan 31 05:59:49 compute-0 ceph-mon[75251]: 8.2 scrub ok
Jan 31 05:59:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:49 compute-0 sshd-session[110313]: Accepted publickey for zuul from 192.168.122.30 port 53260 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 05:59:49 compute-0 systemd-logind[797]: New session 37 of user zuul.
Jan 31 05:59:49 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 31 05:59:49 compute-0 sshd-session[110313]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 05:59:50 compute-0 ceph-mon[75251]: pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:50 compute-0 python3.9[110466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:59:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:51 compute-0 python3.9[110620]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 05:59:52 compute-0 sudo[110774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuocgysifwgkqlulyktfzqenxzjucosv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839191.9107077-35-202704176899637/AnsiballZ_setup.py'
Jan 31 05:59:52 compute-0 sudo[110774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:52 compute-0 ceph-mon[75251]: pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:52 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 05:59:52 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 05:59:52 compute-0 python3.9[110776]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:59:52 compute-0 sudo[110774]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:53 compute-0 sudo[110858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhndfusrgicaasxahdhsoqpyrzkrqgfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839191.9107077-35-202704176899637/AnsiballZ_dnf.py'
Jan 31 05:59:53 compute-0 sudo[110858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:53 compute-0 ceph-mon[75251]: 11.d scrub starts
Jan 31 05:59:53 compute-0 ceph-mon[75251]: 11.d scrub ok
Jan 31 05:59:53 compute-0 python3.9[110860]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 05:59:54 compute-0 ceph-mon[75251]: pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:54 compute-0 sudo[110858]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:55 compute-0 sudo[111011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksomwzhwhkvviukojqrrtwtnlvbrjbjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839194.7668931-47-55289593867088/AnsiballZ_setup.py'
Jan 31 05:59:55 compute-0 sudo[111011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:55 compute-0 python3.9[111013]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 05:59:55 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 31 05:59:55 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 31 05:59:55 compute-0 sudo[111011]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 05:59:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 05:59:56 compute-0 sudo[111206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wddoapgqtsnvttytilzgslyutddvpzww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839195.712588-58-226281025427886/AnsiballZ_file.py'
Jan 31 05:59:56 compute-0 sudo[111206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:56 compute-0 python3.9[111208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:59:56 compute-0 sudo[111206]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:56 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 31 05:59:56 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 31 05:59:56 compute-0 ceph-mon[75251]: pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:56 compute-0 ceph-mon[75251]: 8.15 scrub starts
Jan 31 05:59:56 compute-0 ceph-mon[75251]: 8.15 scrub ok
Jan 31 05:59:56 compute-0 sudo[111358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakuctqxmffloitoqhaxysxsjwneghst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839196.4459238-66-82565565137777/AnsiballZ_command.py'
Jan 31 05:59:56 compute-0 sudo[111358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:57 compute-0 python3.9[111360]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 05:59:57 compute-0 sudo[111358]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:57 compute-0 ceph-mon[75251]: 11.15 scrub starts
Jan 31 05:59:57 compute-0 ceph-mon[75251]: 11.15 scrub ok
Jan 31 05:59:57 compute-0 sudo[111521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslzdvfcskjvheizzfiptkbesdunnzdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839197.3936377-74-189262683594374/AnsiballZ_stat.py'
Jan 31 05:59:57 compute-0 sudo[111521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:57 compute-0 python3.9[111523]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:59:58 compute-0 sudo[111521]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 05:59:58 compute-0 sudo[111599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejivpjqwqkcszhzdwdgwsgcyqkcziiwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839197.3936377-74-189262683594374/AnsiballZ_file.py'
Jan 31 05:59:58 compute-0 sudo[111599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:58 compute-0 python3.9[111601]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 05:59:58 compute-0 sudo[111599]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:58 compute-0 ceph-mon[75251]: pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:58 compute-0 sudo[111751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slzrvitrziinjxylqngemfzpgdlaqbda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839198.5663667-86-65265528522619/AnsiballZ_stat.py'
Jan 31 05:59:58 compute-0 sudo[111751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:59 compute-0 python3.9[111753]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 05:59:59 compute-0 sudo[111751]: pam_unix(sudo:session): session closed for user root
Jan 31 05:59:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 05:59:59 compute-0 sudo[111829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-momafybenophpialtjqcuwrgcbcurudd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839198.5663667-86-65265528522619/AnsiballZ_file.py'
Jan 31 05:59:59 compute-0 sudo[111829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 05:59:59 compute-0 python3.9[111831]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 05:59:59 compute-0 sudo[111829]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:00 compute-0 sudo[111981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxftoeqjzumgyhgmvqgpixhtwfenkidp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839199.7291837-99-80369768201817/AnsiballZ_ini_file.py'
Jan 31 06:00:00 compute-0 sudo[111981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:00 compute-0 python3.9[111983]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:00:00 compute-0 sudo[111981]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:00 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 31 06:00:00 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 31 06:00:00 compute-0 ceph-mon[75251]: pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:00 compute-0 sudo[112133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzxhtxsltbdiicihakkrzvhafspinhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839200.4257333-99-220430853935559/AnsiballZ_ini_file.py'
Jan 31 06:00:00 compute-0 sudo[112133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:00 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 31 06:00:00 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 31 06:00:00 compute-0 python3.9[112135]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:00:00 compute-0 sudo[112133]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:01 compute-0 sudo[112285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdscwabbpclgfylqfnlbrjxttkfiqhcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839201.038265-99-99917703311825/AnsiballZ_ini_file.py'
Jan 31 06:00:01 compute-0 sudo[112285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:01 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 31 06:00:01 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 31 06:00:01 compute-0 python3.9[112287]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:00:01 compute-0 sudo[112285]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:01 compute-0 sudo[112437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpukthgudugsjuqwowdnzyldxpijhaco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839201.6032667-99-99041731740111/AnsiballZ_ini_file.py'
Jan 31 06:00:01 compute-0 sudo[112437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:02 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 06:00:02 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 06:00:02 compute-0 ceph-mon[75251]: 11.3 scrub starts
Jan 31 06:00:02 compute-0 ceph-mon[75251]: 11.3 scrub ok
Jan 31 06:00:02 compute-0 ceph-mon[75251]: 10.d scrub starts
Jan 31 06:00:02 compute-0 ceph-mon[75251]: 10.d scrub ok
Jan 31 06:00:02 compute-0 python3.9[112439]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:00:02 compute-0 sudo[112437]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:03 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 31 06:00:03 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 31 06:00:03 compute-0 sudo[112590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owslxldkoditquurhwjfaisranafwvca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839203.0360312-130-263243623538782/AnsiballZ_dnf.py'
Jan 31 06:00:03 compute-0 sudo[112590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:03 compute-0 python3.9[112592]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:00:03 compute-0 ceph-mon[75251]: pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:03 compute-0 ceph-mon[75251]: 8.4 scrub starts
Jan 31 06:00:03 compute-0 ceph-mon[75251]: 8.4 scrub ok
Jan 31 06:00:03 compute-0 ceph-mon[75251]: 10.2 scrub starts
Jan 31 06:00:03 compute-0 ceph-mon[75251]: 10.2 scrub ok
Jan 31 06:00:03 compute-0 ceph-mon[75251]: pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:04 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 31 06:00:04 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 31 06:00:04 compute-0 sudo[112590]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:05 compute-0 ceph-mon[75251]: 6.f scrub starts
Jan 31 06:00:05 compute-0 ceph-mon[75251]: 6.f scrub ok
Jan 31 06:00:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:05 compute-0 sudo[112743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igivbldnzakubspwzyrrmebytfgyydgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839205.3334646-141-236601159188248/AnsiballZ_setup.py'
Jan 31 06:00:05 compute-0 sudo[112743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:05 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 31 06:00:05 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 31 06:00:05 compute-0 python3.9[112745]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:00:05 compute-0 sudo[112743]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:06 compute-0 ceph-mon[75251]: 11.8 scrub starts
Jan 31 06:00:06 compute-0 ceph-mon[75251]: 11.8 scrub ok
Jan 31 06:00:06 compute-0 ceph-mon[75251]: pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:06 compute-0 sudo[112897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbidwaedfjesdotmsuucyqieffnigntp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839206.022066-149-132054438305078/AnsiballZ_stat.py'
Jan 31 06:00:06 compute-0 sudo[112897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:06 compute-0 python3.9[112899]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:00:06 compute-0 sudo[112897]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:06 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 31 06:00:06 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 31 06:00:06 compute-0 sudo[113049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyrxaqcwuvahmxuzlngyemrnptvnmkfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839206.656276-158-137381341131518/AnsiballZ_stat.py'
Jan 31 06:00:06 compute-0 sudo[113049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:07 compute-0 ceph-mon[75251]: 10.11 scrub starts
Jan 31 06:00:07 compute-0 ceph-mon[75251]: 10.11 scrub ok
Jan 31 06:00:07 compute-0 python3.9[113051]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:00:07 compute-0 sudo[113049]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:07 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 06:00:07 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 06:00:07 compute-0 sudo[113201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmdksksjfvvdunouiktmruogcdzzpqyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839207.4355412-168-514143213042/AnsiballZ_command.py'
Jan 31 06:00:07 compute-0 sudo[113201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:07 compute-0 python3.9[113203]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:00:07 compute-0 sudo[113201]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:08 compute-0 ceph-mon[75251]: 8.f scrub starts
Jan 31 06:00:08 compute-0 ceph-mon[75251]: 8.f scrub ok
Jan 31 06:00:08 compute-0 ceph-mon[75251]: pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:08 compute-0 ceph-mon[75251]: 11.9 scrub starts
Jan 31 06:00:08 compute-0 ceph-mon[75251]: 11.9 scrub ok
Jan 31 06:00:08 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 31 06:00:08 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 31 06:00:08 compute-0 sudo[113354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbksdpyohffrzsfeqrpomyoqwneekns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839208.1166036-178-177356143253186/AnsiballZ_service_facts.py'
Jan 31 06:00:08 compute-0 sudo[113354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:08 compute-0 python3.9[113356]: ansible-service_facts Invoked
Jan 31 06:00:08 compute-0 network[113373]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:00:08 compute-0 network[113374]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:00:08 compute-0 network[113375]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:00:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:09 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 31 06:00:09 compute-0 ceph-mon[75251]: 8.11 scrub starts
Jan 31 06:00:09 compute-0 ceph-mon[75251]: 8.11 scrub ok
Jan 31 06:00:09 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 31 06:00:09 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 06:00:09 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 06:00:10 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 06:00:10 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 06:00:10 compute-0 ceph-mon[75251]: pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:10 compute-0 ceph-mon[75251]: 9.8 scrub starts
Jan 31 06:00:10 compute-0 ceph-mon[75251]: 9.8 scrub ok
Jan 31 06:00:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:11 compute-0 ceph-mon[75251]: 10.e scrub starts
Jan 31 06:00:11 compute-0 ceph-mon[75251]: 10.e scrub ok
Jan 31 06:00:11 compute-0 ceph-mon[75251]: 9.e scrub starts
Jan 31 06:00:11 compute-0 ceph-mon[75251]: 9.e scrub ok
Jan 31 06:00:11 compute-0 sudo[113354]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:12 compute-0 ceph-mon[75251]: pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:12 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 31 06:00:12 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 31 06:00:12 compute-0 sudo[113658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjmzahbfxhyzlkqhdnwsjdugiwzjdwub ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769839212.646576-193-173638948731429/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769839212.646576-193-173638948731429/args'
Jan 31 06:00:12 compute-0 sudo[113658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:13 compute-0 sudo[113658]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:13 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 31 06:00:13 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 31 06:00:13 compute-0 ceph-mon[75251]: 10.10 scrub starts
Jan 31 06:00:13 compute-0 ceph-mon[75251]: 10.10 scrub ok
Jan 31 06:00:13 compute-0 sudo[113825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stdmelxjfgnwxhvklfezifrzmdpesivg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839213.287713-204-118082994707133/AnsiballZ_dnf.py'
Jan 31 06:00:13 compute-0 sudo[113825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 31 06:00:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 31 06:00:13 compute-0 python3.9[113827]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:00:14 compute-0 ceph-mon[75251]: pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:14 compute-0 ceph-mon[75251]: 9.18 scrub starts
Jan 31 06:00:14 compute-0 ceph-mon[75251]: 9.18 scrub ok
Jan 31 06:00:15 compute-0 sudo[113825]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:00:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:00:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:00:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:00:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:00:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:00:15 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 06:00:15 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 06:00:15 compute-0 ceph-mon[75251]: 9.1c scrub starts
Jan 31 06:00:15 compute-0 ceph-mon[75251]: 9.1c scrub ok
Jan 31 06:00:15 compute-0 sudo[113978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyidserkhkhludwbdjxxnjvgbduymfbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839215.3777843-217-241852550545106/AnsiballZ_package_facts.py'
Jan 31 06:00:15 compute-0 sudo[113978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:16 compute-0 python3.9[113980]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 06:00:16 compute-0 sudo[113978]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:16 compute-0 ceph-mon[75251]: pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:16 compute-0 ceph-mon[75251]: 9.1b scrub starts
Jan 31 06:00:16 compute-0 ceph-mon[75251]: 9.1b scrub ok
Jan 31 06:00:17 compute-0 sudo[114130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukirwoycvefogehmsmcjlzcbptofupny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839216.9033077-227-192064614204761/AnsiballZ_stat.py'
Jan 31 06:00:17 compute-0 sudo[114130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:17 compute-0 python3.9[114132]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:17 compute-0 sudo[114130]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:17 compute-0 sudo[114208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkgmqpjitgmsiuobgygcvxzacoubqcil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839216.9033077-227-192064614204761/AnsiballZ_file.py'
Jan 31 06:00:17 compute-0 sudo[114208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:17 compute-0 python3.9[114210]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:17 compute-0 sudo[114208]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:18 compute-0 ceph-mon[75251]: pgmap v320: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:18 compute-0 sudo[114360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnndmwkznmkxcyhkibpvzbvumjdehgtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839218.0662642-239-88516347027372/AnsiballZ_stat.py'
Jan 31 06:00:18 compute-0 sudo[114360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:18 compute-0 python3.9[114362]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:18 compute-0 sudo[114360]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:18 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 31 06:00:18 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 31 06:00:18 compute-0 sudo[114438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexrcmtimehjccoxucmtkjekzieddxlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839218.0662642-239-88516347027372/AnsiballZ_file.py'
Jan 31 06:00:18 compute-0 sudo[114438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:19 compute-0 python3.9[114440]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:19 compute-0 sudo[114438]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:19 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 31 06:00:19 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 31 06:00:19 compute-0 sudo[114590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hduthuidqxuprxhuhjqzuhlhsxogyljg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839219.5712724-257-129776349314068/AnsiballZ_lineinfile.py'
Jan 31 06:00:19 compute-0 sudo[114590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:19 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 31 06:00:19 compute-0 ceph-mon[75251]: 9.3 scrub starts
Jan 31 06:00:19 compute-0 ceph-mon[75251]: 9.3 scrub ok
Jan 31 06:00:19 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:19.993586) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:00:19 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 31 06:00:19 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839219993749, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7365, "num_deletes": 251, "total_data_size": 9904972, "memory_usage": 10082256, "flush_reason": "Manual Compaction"}
Jan 31 06:00:19 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 31 06:00:20 compute-0 python3.9[114592]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839220253717, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7928451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7508, "table_properties": {"data_size": 7900563, "index_size": 18429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 77728, "raw_average_key_size": 23, "raw_value_size": 7835732, "raw_average_value_size": 2345, "num_data_blocks": 808, "num_entries": 3341, "num_filter_entries": 3341, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838806, "oldest_key_time": 1769838806, "file_creation_time": 1769839219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 260222 microseconds, and 23269 cpu microseconds.
Jan 31 06:00:20 compute-0 sudo[114590]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.253816) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7928451 bytes OK
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.253857) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.273361) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.273447) EVENT_LOG_v1 {"time_micros": 1769839220273431, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.273511) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9872678, prev total WAL file size 9872678, number of live WAL files 2.
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.275965) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7742KB) 13(58KB) 8(1944B)]
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839220276081, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7990339, "oldest_snapshot_seqno": -1}
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3167 keys, 7943240 bytes, temperature: kUnknown
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839220607919, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7943240, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7915749, "index_size": 18485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 76168, "raw_average_key_size": 24, "raw_value_size": 7852244, "raw_average_value_size": 2479, "num_data_blocks": 812, "num_entries": 3167, "num_filter_entries": 3167, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769839220, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.608205) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7943240 bytes
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.650707) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.1 rd, 23.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.6, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3456, records dropped: 289 output_compression: NoCompression
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.650763) EVENT_LOG_v1 {"time_micros": 1769839220650741, "job": 4, "event": "compaction_finished", "compaction_time_micros": 331905, "compaction_time_cpu_micros": 25283, "output_level": 6, "num_output_files": 1, "total_output_size": 7943240, "num_input_records": 3456, "num_output_records": 3167, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839220651930, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839220651988, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839220652016, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 06:00:20 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:00:20.275780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:00:21 compute-0 ceph-mon[75251]: pgmap v321: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:21 compute-0 ceph-mon[75251]: 9.13 scrub starts
Jan 31 06:00:21 compute-0 ceph-mon[75251]: 9.13 scrub ok
Jan 31 06:00:21 compute-0 sudo[114743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvwxhoxwkwbimwjorktxhprqdlhctlmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839220.839668-272-196343851726884/AnsiballZ_setup.py'
Jan 31 06:00:21 compute-0 sudo[114743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:21 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 06:00:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:21 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 06:00:21 compute-0 python3.9[114745]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:00:21 compute-0 sudo[114743]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:22 compute-0 ceph-mon[75251]: 9.19 scrub starts
Jan 31 06:00:22 compute-0 ceph-mon[75251]: pgmap v322: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:22 compute-0 ceph-mon[75251]: 9.19 scrub ok
Jan 31 06:00:22 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 06:00:22 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 06:00:22 compute-0 sudo[114827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whusxczhdxoxncjcdrgcchnelvmrvtve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839220.839668-272-196343851726884/AnsiballZ_systemd.py'
Jan 31 06:00:22 compute-0 sudo[114827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:22 compute-0 python3.9[114829]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:00:22 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 31 06:00:22 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 31 06:00:22 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 31 06:00:22 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 31 06:00:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:23 compute-0 ceph-mon[75251]: 9.6 scrub starts
Jan 31 06:00:23 compute-0 ceph-mon[75251]: 9.6 scrub ok
Jan 31 06:00:23 compute-0 ceph-mon[75251]: 10.13 scrub starts
Jan 31 06:00:23 compute-0 ceph-mon[75251]: 10.13 scrub ok
Jan 31 06:00:23 compute-0 sudo[114827]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:24 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 31 06:00:24 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 31 06:00:24 compute-0 sshd-session[110316]: Connection closed by 192.168.122.30 port 53260
Jan 31 06:00:24 compute-0 sshd-session[110313]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:00:24 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 06:00:24 compute-0 systemd[1]: session-37.scope: Consumed 20.236s CPU time.
Jan 31 06:00:24 compute-0 systemd-logind[797]: Session 37 logged out. Waiting for processes to exit.
Jan 31 06:00:24 compute-0 systemd-logind[797]: Removed session 37.
Jan 31 06:00:24 compute-0 ceph-mon[75251]: 9.1d scrub starts
Jan 31 06:00:24 compute-0 ceph-mon[75251]: 9.1d scrub ok
Jan 31 06:00:24 compute-0 ceph-mon[75251]: pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:24 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 31 06:00:24 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 31 06:00:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:25 compute-0 ceph-mon[75251]: 9.7 scrub starts
Jan 31 06:00:25 compute-0 ceph-mon[75251]: 9.7 scrub ok
Jan 31 06:00:25 compute-0 ceph-mon[75251]: 10.f scrub starts
Jan 31 06:00:25 compute-0 ceph-mon[75251]: 10.f scrub ok
Jan 31 06:00:26 compute-0 sudo[114856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:00:26 compute-0 sudo[114856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:26 compute-0 sudo[114856]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:26 compute-0 sudo[114881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:00:26 compute-0 sudo[114881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:26 compute-0 sudo[114881]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:26 compute-0 ceph-mon[75251]: pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:00:26 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:00:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:00:26 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:00:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:00:26 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:00:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:00:26 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:00:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:00:26 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:00:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:00:27 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:00:27 compute-0 sudo[114938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:00:27 compute-0 sudo[114938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:27 compute-0 sudo[114938]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:27 compute-0 sudo[114963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:00:27 compute-0 sudo[114963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:27 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 06:00:27 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 06:00:27 compute-0 podman[115001]: 2026-01-31 06:00:27.339576743 +0000 UTC m=+0.045776885 container create 029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_fermi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:00:27 compute-0 systemd[1]: Started libpod-conmon-029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7.scope.
Jan 31 06:00:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:00:27 compute-0 podman[115001]: 2026-01-31 06:00:27.315932695 +0000 UTC m=+0.022132817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:00:27 compute-0 podman[115001]: 2026-01-31 06:00:27.420040704 +0000 UTC m=+0.126240826 container init 029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:00:27 compute-0 podman[115001]: 2026-01-31 06:00:27.423731537 +0000 UTC m=+0.129931639 container start 029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_fermi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 06:00:27 compute-0 sharp_fermi[115018]: 167 167
Jan 31 06:00:27 compute-0 systemd[1]: libpod-029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7.scope: Deactivated successfully.
Jan 31 06:00:27 compute-0 podman[115001]: 2026-01-31 06:00:27.429031334 +0000 UTC m=+0.135231456 container attach 029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_fermi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:00:27 compute-0 conmon[115018]: conmon 029484aee9fd59bd83bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7.scope/container/memory.events
Jan 31 06:00:27 compute-0 podman[115001]: 2026-01-31 06:00:27.429737514 +0000 UTC m=+0.135937646 container died 029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 06:00:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-71ffeec98a9b063c400908bc4614b48ed2833fd60ccd33736303ffb884ded78f-merged.mount: Deactivated successfully.
Jan 31 06:00:27 compute-0 podman[115001]: 2026-01-31 06:00:27.476732772 +0000 UTC m=+0.182932874 container remove 029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 31 06:00:27 compute-0 systemd[1]: libpod-conmon-029484aee9fd59bd83bfae741bbd7ec6d7fd30cd341622b92124f48d32d368a7.scope: Deactivated successfully.
Jan 31 06:00:27 compute-0 podman[115041]: 2026-01-31 06:00:27.601723732 +0000 UTC m=+0.038664637 container create bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:00:27 compute-0 systemd[1]: Started libpod-conmon-bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1.scope.
Jan 31 06:00:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c76c23ca2b90a38a2cd97fe76c4bd34489620cdfd89bb31e27794dd81a2c193/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c76c23ca2b90a38a2cd97fe76c4bd34489620cdfd89bb31e27794dd81a2c193/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c76c23ca2b90a38a2cd97fe76c4bd34489620cdfd89bb31e27794dd81a2c193/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c76c23ca2b90a38a2cd97fe76c4bd34489620cdfd89bb31e27794dd81a2c193/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c76c23ca2b90a38a2cd97fe76c4bd34489620cdfd89bb31e27794dd81a2c193/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:27 compute-0 podman[115041]: 2026-01-31 06:00:27.585336936 +0000 UTC m=+0.022277851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:00:27 compute-0 podman[115041]: 2026-01-31 06:00:27.692356376 +0000 UTC m=+0.129297251 container init bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:00:27 compute-0 podman[115041]: 2026-01-31 06:00:27.702001234 +0000 UTC m=+0.138942139 container start bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:00:27 compute-0 podman[115041]: 2026-01-31 06:00:27.706457108 +0000 UTC m=+0.143398023 container attach bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:00:27 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:00:27 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:00:27 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:00:27 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:00:27 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:00:27 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:00:27 compute-0 ceph-mon[75251]: pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:28 compute-0 bold_saha[115058]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:00:28 compute-0 bold_saha[115058]: --> All data devices are unavailable
Jan 31 06:00:28 compute-0 systemd[1]: libpod-bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1.scope: Deactivated successfully.
Jan 31 06:00:28 compute-0 podman[115041]: 2026-01-31 06:00:28.093784462 +0000 UTC m=+0.530725357 container died bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c76c23ca2b90a38a2cd97fe76c4bd34489620cdfd89bb31e27794dd81a2c193-merged.mount: Deactivated successfully.
Jan 31 06:00:28 compute-0 podman[115041]: 2026-01-31 06:00:28.125865566 +0000 UTC m=+0.562806431 container remove bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 06:00:28 compute-0 systemd[1]: libpod-conmon-bf6554b3ac1ed741a3b070ca5a4470fcb994594bb83d4df7288b68273b7ce2c1.scope: Deactivated successfully.
Jan 31 06:00:28 compute-0 sudo[114963]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:28 compute-0 sudo[115090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:00:28 compute-0 sudo[115090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:28 compute-0 sudo[115090]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:28 compute-0 sudo[115115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:00:28 compute-0 sudo[115115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:28 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 06:00:28 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 06:00:28 compute-0 podman[115152]: 2026-01-31 06:00:28.450435081 +0000 UTC m=+0.030861190 container create 716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 06:00:28 compute-0 systemd[1]: Started libpod-conmon-716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee.scope.
Jan 31 06:00:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:00:28 compute-0 podman[115152]: 2026-01-31 06:00:28.504095245 +0000 UTC m=+0.084521364 container init 716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 06:00:28 compute-0 podman[115152]: 2026-01-31 06:00:28.507974383 +0000 UTC m=+0.088400492 container start 716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:00:28 compute-0 ecstatic_hertz[115168]: 167 167
Jan 31 06:00:28 compute-0 systemd[1]: libpod-716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee.scope: Deactivated successfully.
Jan 31 06:00:28 compute-0 podman[115152]: 2026-01-31 06:00:28.51144736 +0000 UTC m=+0.091873469 container attach 716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:00:28 compute-0 podman[115152]: 2026-01-31 06:00:28.511663666 +0000 UTC m=+0.092089775 container died 716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1afd8266f104139fc2ad62a05d0e99e672c32a1f8f3c3e72335ca874d5c3f87e-merged.mount: Deactivated successfully.
Jan 31 06:00:28 compute-0 podman[115152]: 2026-01-31 06:00:28.436225666 +0000 UTC m=+0.016651815 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:00:28 compute-0 podman[115152]: 2026-01-31 06:00:28.544222763 +0000 UTC m=+0.124648872 container remove 716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_hertz, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 06:00:28 compute-0 systemd[1]: libpod-conmon-716e3bae62edef24c3525fc08451aa94f2d003e1bb2b5cc72549e1806fa1a7ee.scope: Deactivated successfully.
Jan 31 06:00:28 compute-0 podman[115193]: 2026-01-31 06:00:28.642086717 +0000 UTC m=+0.033717049 container create a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:00:28 compute-0 systemd[1]: Started libpod-conmon-a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e.scope.
Jan 31 06:00:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f831a90f7d57af1294b7dcfd9a62a16de9cba3ef0b4b8615725ded62c8eda4fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f831a90f7d57af1294b7dcfd9a62a16de9cba3ef0b4b8615725ded62c8eda4fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f831a90f7d57af1294b7dcfd9a62a16de9cba3ef0b4b8615725ded62c8eda4fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f831a90f7d57af1294b7dcfd9a62a16de9cba3ef0b4b8615725ded62c8eda4fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:28 compute-0 podman[115193]: 2026-01-31 06:00:28.695185176 +0000 UTC m=+0.086815528 container init a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Jan 31 06:00:28 compute-0 podman[115193]: 2026-01-31 06:00:28.699615089 +0000 UTC m=+0.091245421 container start a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:00:28 compute-0 podman[115193]: 2026-01-31 06:00:28.702053797 +0000 UTC m=+0.093684119 container attach a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:00:28 compute-0 podman[115193]: 2026-01-31 06:00:28.628819788 +0000 UTC m=+0.020450140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:00:28 compute-0 ceph-mon[75251]: 9.c scrub starts
Jan 31 06:00:28 compute-0 ceph-mon[75251]: 9.c scrub ok
Jan 31 06:00:28 compute-0 youthful_euler[115209]: {
Jan 31 06:00:28 compute-0 youthful_euler[115209]:     "0": [
Jan 31 06:00:28 compute-0 youthful_euler[115209]:         {
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "devices": [
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "/dev/loop3"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             ],
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_name": "ceph_lv0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_size": "21470642176",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "name": "ceph_lv0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "tags": {
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cluster_name": "ceph",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.crush_device_class": "",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.encrypted": "0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.objectstore": "bluestore",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osd_id": "0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.type": "block",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.vdo": "0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.with_tpm": "0"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             },
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "type": "block",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "vg_name": "ceph_vg0"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:         }
Jan 31 06:00:28 compute-0 youthful_euler[115209]:     ],
Jan 31 06:00:28 compute-0 youthful_euler[115209]:     "1": [
Jan 31 06:00:28 compute-0 youthful_euler[115209]:         {
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "devices": [
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "/dev/loop4"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             ],
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_name": "ceph_lv1",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_size": "21470642176",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "name": "ceph_lv1",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "tags": {
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cluster_name": "ceph",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.crush_device_class": "",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.encrypted": "0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.objectstore": "bluestore",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osd_id": "1",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.type": "block",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.vdo": "0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.with_tpm": "0"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             },
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "type": "block",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "vg_name": "ceph_vg1"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:         }
Jan 31 06:00:28 compute-0 youthful_euler[115209]:     ],
Jan 31 06:00:28 compute-0 youthful_euler[115209]:     "2": [
Jan 31 06:00:28 compute-0 youthful_euler[115209]:         {
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "devices": [
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "/dev/loop5"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             ],
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_name": "ceph_lv2",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_size": "21470642176",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "name": "ceph_lv2",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "tags": {
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.cluster_name": "ceph",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.crush_device_class": "",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.encrypted": "0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.objectstore": "bluestore",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osd_id": "2",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.type": "block",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.vdo": "0",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:                 "ceph.with_tpm": "0"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             },
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "type": "block",
Jan 31 06:00:28 compute-0 youthful_euler[115209]:             "vg_name": "ceph_vg2"
Jan 31 06:00:28 compute-0 youthful_euler[115209]:         }
Jan 31 06:00:28 compute-0 youthful_euler[115209]:     ]
Jan 31 06:00:28 compute-0 youthful_euler[115209]: }
Jan 31 06:00:28 compute-0 systemd[1]: libpod-a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e.scope: Deactivated successfully.
Jan 31 06:00:28 compute-0 podman[115193]: 2026-01-31 06:00:28.952813919 +0000 UTC m=+0.344444271 container died a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 06:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f831a90f7d57af1294b7dcfd9a62a16de9cba3ef0b4b8615725ded62c8eda4fb-merged.mount: Deactivated successfully.
Jan 31 06:00:28 compute-0 podman[115193]: 2026-01-31 06:00:28.997845472 +0000 UTC m=+0.389475794 container remove a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_euler, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 06:00:29 compute-0 systemd[1]: libpod-conmon-a951f6fb170ac7030c9c83a9a614db5de0f662448138110e9d057bc465291e5e.scope: Deactivated successfully.
Jan 31 06:00:29 compute-0 sudo[115115]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:29 compute-0 sudo[115231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:00:29 compute-0 sudo[115231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:29 compute-0 sudo[115231]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:29 compute-0 sudo[115256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:00:29 compute-0 sudo[115256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:29 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 31 06:00:29 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 31 06:00:29 compute-0 podman[115292]: 2026-01-31 06:00:29.348540196 +0000 UTC m=+0.047779601 container create f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:00:29 compute-0 systemd[1]: Started libpod-conmon-f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e.scope.
Jan 31 06:00:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:00:29 compute-0 podman[115292]: 2026-01-31 06:00:29.319392525 +0000 UTC m=+0.018631940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:00:29 compute-0 podman[115292]: 2026-01-31 06:00:29.417241469 +0000 UTC m=+0.116480874 container init f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:00:29 compute-0 podman[115292]: 2026-01-31 06:00:29.420925992 +0000 UTC m=+0.120165397 container start f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:00:29 compute-0 compassionate_hopper[115308]: 167 167
Jan 31 06:00:29 compute-0 systemd[1]: libpod-f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e.scope: Deactivated successfully.
Jan 31 06:00:29 compute-0 podman[115292]: 2026-01-31 06:00:29.440032124 +0000 UTC m=+0.139271559 container attach f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:00:29 compute-0 podman[115292]: 2026-01-31 06:00:29.440476256 +0000 UTC m=+0.139715651 container died f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hopper, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 06:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2ca8fb2a5ae85fe08041d382c25616dc3967d3e92a8db08f5d1b567ed011c44-merged.mount: Deactivated successfully.
Jan 31 06:00:29 compute-0 podman[115292]: 2026-01-31 06:00:29.645295009 +0000 UTC m=+0.344534414 container remove f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:00:29 compute-0 systemd[1]: libpod-conmon-f76f32a9937a739a2eb1f73cf376c5915c9f576605a8068a0a65eeec209aa44e.scope: Deactivated successfully.
Jan 31 06:00:29 compute-0 podman[115334]: 2026-01-31 06:00:29.762759509 +0000 UTC m=+0.052505843 container create 9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 31 06:00:29 compute-0 systemd[1]: Started libpod-conmon-9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599.scope.
Jan 31 06:00:29 compute-0 podman[115334]: 2026-01-31 06:00:29.725520783 +0000 UTC m=+0.015267147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:00:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd02f626e75a518ce8b63fa1b70d98e4bb22940711da6193e4620a4eda35e1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd02f626e75a518ce8b63fa1b70d98e4bb22940711da6193e4620a4eda35e1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd02f626e75a518ce8b63fa1b70d98e4bb22940711da6193e4620a4eda35e1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd02f626e75a518ce8b63fa1b70d98e4bb22940711da6193e4620a4eda35e1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:00:29 compute-0 ceph-mon[75251]: 9.f scrub starts
Jan 31 06:00:29 compute-0 ceph-mon[75251]: 9.f scrub ok
Jan 31 06:00:29 compute-0 ceph-mon[75251]: pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:29 compute-0 podman[115334]: 2026-01-31 06:00:29.847676883 +0000 UTC m=+0.137423237 container init 9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:00:29 compute-0 podman[115334]: 2026-01-31 06:00:29.853139206 +0000 UTC m=+0.142885530 container start 9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:00:29 compute-0 podman[115334]: 2026-01-31 06:00:29.857211289 +0000 UTC m=+0.146957643 container attach 9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:00:30 compute-0 lvm[115427]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:00:30 compute-0 lvm[115427]: VG ceph_vg0 finished
Jan 31 06:00:30 compute-0 lvm[115430]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:00:30 compute-0 lvm[115430]: VG ceph_vg1 finished
Jan 31 06:00:30 compute-0 lvm[115432]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:00:30 compute-0 lvm[115432]: VG ceph_vg2 finished
Jan 31 06:00:30 compute-0 stupefied_bouman[115351]: {}
Jan 31 06:00:30 compute-0 systemd[1]: libpod-9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599.scope: Deactivated successfully.
Jan 31 06:00:30 compute-0 sshd-session[115434]: Accepted publickey for zuul from 192.168.122.30 port 49338 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:00:30 compute-0 systemd-logind[797]: New session 38 of user zuul.
Jan 31 06:00:30 compute-0 podman[115437]: 2026-01-31 06:00:30.624677457 +0000 UTC m=+0.017328454 container died 9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 06:00:30 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 31 06:00:30 compute-0 sshd-session[115434]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cd02f626e75a518ce8b63fa1b70d98e4bb22940711da6193e4620a4eda35e1e-merged.mount: Deactivated successfully.
Jan 31 06:00:30 compute-0 podman[115437]: 2026-01-31 06:00:30.691741554 +0000 UTC m=+0.084392531 container remove 9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bouman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:00:30 compute-0 systemd[1]: libpod-conmon-9f3c9deca06d2c411d5eb6946a69871b4cd6c3b2086e14e61c33d1f36389f599.scope: Deactivated successfully.
Jan 31 06:00:30 compute-0 sudo[115256]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:30 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:00:30 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:00:30 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:00:30 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:00:30 compute-0 sudo[115500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:00:30 compute-0 sudo[115500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:00:30 compute-0 sudo[115500]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:30 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 31 06:00:30 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 31 06:00:30 compute-0 ceph-mon[75251]: 9.17 scrub starts
Jan 31 06:00:30 compute-0 ceph-mon[75251]: 9.17 scrub ok
Jan 31 06:00:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:00:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:00:31 compute-0 sudo[115628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnhhvpqzxitdldpdivqbremabihsrlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839230.7166123-17-251653214438675/AnsiballZ_file.py'
Jan 31 06:00:31 compute-0 sudo[115628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:31 compute-0 python3.9[115630]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:31 compute-0 sudo[115628]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:31 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 31 06:00:31 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 31 06:00:31 compute-0 sudo[115780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giznegplqivenswodauulhsumsmhnxqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839231.4384372-29-210610377206916/AnsiballZ_stat.py'
Jan 31 06:00:31 compute-0 sudo[115780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:31 compute-0 ceph-mon[75251]: 9.1 scrub starts
Jan 31 06:00:31 compute-0 ceph-mon[75251]: 9.1 scrub ok
Jan 31 06:00:31 compute-0 ceph-mon[75251]: pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:32 compute-0 python3.9[115782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:32 compute-0 sudo[115780]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:32 compute-0 sudo[115858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vehrtgjgjoowekqrwzrpclwtzrtrkpdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839231.4384372-29-210610377206916/AnsiballZ_file.py'
Jan 31 06:00:32 compute-0 sudo[115858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:32 compute-0 python3.9[115860]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:32 compute-0 sudo[115858]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:32 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 31 06:00:32 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 31 06:00:32 compute-0 sshd-session[115451]: Connection closed by 192.168.122.30 port 49338
Jan 31 06:00:32 compute-0 sshd-session[115434]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:00:32 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 06:00:32 compute-0 systemd[1]: session-38.scope: Consumed 1.187s CPU time.
Jan 31 06:00:32 compute-0 systemd-logind[797]: Session 38 logged out. Waiting for processes to exit.
Jan 31 06:00:32 compute-0 systemd-logind[797]: Removed session 38.
Jan 31 06:00:32 compute-0 ceph-mon[75251]: 9.d scrub starts
Jan 31 06:00:32 compute-0 ceph-mon[75251]: 9.d scrub ok
Jan 31 06:00:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:33 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 31 06:00:33 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 31 06:00:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 31 06:00:33 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 31 06:00:33 compute-0 ceph-mon[75251]: 10.6 scrub starts
Jan 31 06:00:33 compute-0 ceph-mon[75251]: 10.6 scrub ok
Jan 31 06:00:33 compute-0 ceph-mon[75251]: pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:34 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 06:00:34 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 06:00:34 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 31 06:00:34 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 31 06:00:34 compute-0 ceph-mon[75251]: 10.b scrub starts
Jan 31 06:00:34 compute-0 ceph-mon[75251]: 10.b scrub ok
Jan 31 06:00:34 compute-0 ceph-mon[75251]: 9.9 scrub starts
Jan 31 06:00:34 compute-0 ceph-mon[75251]: 9.9 scrub ok
Jan 31 06:00:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:35 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 31 06:00:35 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 31 06:00:35 compute-0 ceph-mon[75251]: 10.19 scrub starts
Jan 31 06:00:35 compute-0 ceph-mon[75251]: 10.19 scrub ok
Jan 31 06:00:35 compute-0 ceph-mon[75251]: 9.16 scrub starts
Jan 31 06:00:35 compute-0 ceph-mon[75251]: 9.16 scrub ok
Jan 31 06:00:35 compute-0 ceph-mon[75251]: pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:36 compute-0 ceph-mon[75251]: 9.b scrub starts
Jan 31 06:00:36 compute-0 ceph-mon[75251]: 9.b scrub ok
Jan 31 06:00:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:37 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 31 06:00:37 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 31 06:00:37 compute-0 ceph-mon[75251]: pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:38 compute-0 sshd-session[115885]: Accepted publickey for zuul from 192.168.122.30 port 44984 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:00:38 compute-0 systemd-logind[797]: New session 39 of user zuul.
Jan 31 06:00:38 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 31 06:00:38 compute-0 sshd-session[115885]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:00:39 compute-0 ceph-mon[75251]: 9.5 scrub starts
Jan 31 06:00:39 compute-0 ceph-mon[75251]: 9.5 scrub ok
Jan 31 06:00:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:39 compute-0 python3.9[116038]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:00:39 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 31 06:00:39 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 31 06:00:40 compute-0 ceph-mon[75251]: pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:40 compute-0 sudo[116192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhqdknyuiistymkehddiveilwxiukbha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839239.954085-28-5121289791850/AnsiballZ_file.py'
Jan 31 06:00:40 compute-0 sudo[116192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:40 compute-0 python3.9[116194]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:40 compute-0 sudo[116192]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:41 compute-0 sudo[116367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwvatlfwbmvbxsvsxaluwadxqelbvskr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839240.6551745-36-21782250327538/AnsiballZ_stat.py'
Jan 31 06:00:41 compute-0 sudo[116367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:41 compute-0 ceph-mon[75251]: 9.11 scrub starts
Jan 31 06:00:41 compute-0 ceph-mon[75251]: 9.11 scrub ok
Jan 31 06:00:41 compute-0 python3.9[116369]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:41 compute-0 sudo[116367]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:41 compute-0 sudo[116445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngyfhefupvhjarkrvtiaqrpytrhkwqct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839240.6551745-36-21782250327538/AnsiballZ_file.py'
Jan 31 06:00:41 compute-0 sudo[116445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:41 compute-0 python3.9[116447]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.68t8mhpp recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:41 compute-0 sudo[116445]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:42 compute-0 ceph-mon[75251]: pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:42 compute-0 sudo[116597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rewzhfoffalzmvldjwogjwfaadftemjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839241.9297924-56-32786484967972/AnsiballZ_stat.py'
Jan 31 06:00:42 compute-0 sudo[116597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:42 compute-0 python3.9[116599]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:42 compute-0 sudo[116597]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:42 compute-0 sudo[116675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoszepfpgtnsfscfyyqscpidappmdgsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839241.9297924-56-32786484967972/AnsiballZ_file.py'
Jan 31 06:00:42 compute-0 sudo[116675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:42 compute-0 python3.9[116677]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.4zlj4lmf recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:42 compute-0 sudo[116675]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:43 compute-0 sudo[116827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzoncvupcvmiwqdmqbeynmkyfruafloh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839242.8862636-69-189741696468415/AnsiballZ_file.py'
Jan 31 06:00:43 compute-0 sudo[116827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:43 compute-0 python3.9[116829]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:00:43 compute-0 sudo[116827]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:43 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 31 06:00:43 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 31 06:00:43 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 31 06:00:43 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 31 06:00:43 compute-0 sudo[116980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nydmhqtfblhhcyvjtmewduurwckwtzlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839243.637776-77-215557536398352/AnsiballZ_stat.py'
Jan 31 06:00:43 compute-0 sudo[116980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:44 compute-0 python3.9[116982]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:44 compute-0 sudo[116980]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:44 compute-0 sudo[117058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwkwubymbrcqwlxdcsjocyboiwlabavm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839243.637776-77-215557536398352/AnsiballZ_file.py'
Jan 31 06:00:44 compute-0 sudo[117058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:00:44
Jan 31 06:00:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:00:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:00:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 31 06:00:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:00:44 compute-0 python3.9[117060]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:00:44 compute-0 sudo[117058]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:44 compute-0 ceph-mon[75251]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:44 compute-0 ceph-mon[75251]: 10.12 scrub starts
Jan 31 06:00:44 compute-0 ceph-mon[75251]: 10.12 scrub ok
Jan 31 06:00:44 compute-0 sudo[117210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqzhoqcxfdopqwegnanfjnujbfbrassc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839244.560173-77-258535732432394/AnsiballZ_stat.py'
Jan 31 06:00:44 compute-0 sudo[117210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:45 compute-0 python3.9[117212]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:45 compute-0 sudo[117210]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:45 compute-0 sudo[117288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfbxiywhjglzgkvdjivfyrommubfmgjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839244.560173-77-258535732432394/AnsiballZ_file.py'
Jan 31 06:00:45 compute-0 sudo[117288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:00:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:00:45 compute-0 python3.9[117290]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:00:45 compute-0 sudo[117288]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:45 compute-0 ceph-mon[75251]: 9.1e scrub starts
Jan 31 06:00:45 compute-0 ceph-mon[75251]: 9.1e scrub ok
Jan 31 06:00:45 compute-0 sudo[117440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpqmkwttvyihsuldmxzaouzvakfuxwtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839245.6344-100-3098457789438/AnsiballZ_file.py'
Jan 31 06:00:45 compute-0 sudo[117440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:46 compute-0 python3.9[117442]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:46 compute-0 sudo[117440]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:46 compute-0 sudo[117592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtpyikijnjhfxiymcqqrdwktdsfeeibn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839246.2080715-108-117140415027648/AnsiballZ_stat.py'
Jan 31 06:00:46 compute-0 sudo[117592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:46 compute-0 ceph-mon[75251]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:46 compute-0 python3.9[117594]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:46 compute-0 sudo[117592]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:46 compute-0 sudo[117670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkdgthmuqrfftzdgehsosuefgfrxpopz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839246.2080715-108-117140415027648/AnsiballZ_file.py'
Jan 31 06:00:46 compute-0 sudo[117670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:47 compute-0 python3.9[117672]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:47 compute-0 sudo[117670]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:47 compute-0 sudo[117822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucjtvpfzecqjamsvakbjebecnyapfdzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839247.3355289-120-242954650176512/AnsiballZ_stat.py'
Jan 31 06:00:47 compute-0 sudo[117822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:47 compute-0 python3.9[117824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:47 compute-0 ceph-mon[75251]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:47 compute-0 sudo[117822]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:47 compute-0 sudo[117900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwtorjfngioyeyyhvnfdufliomdqhgmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839247.3355289-120-242954650176512/AnsiballZ_file.py'
Jan 31 06:00:47 compute-0 sudo[117900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:48 compute-0 python3.9[117902]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:48 compute-0 sudo[117900]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:48 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 31 06:00:48 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 31 06:00:48 compute-0 sudo[118052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dufhxnrtlyetjjihekuxhuhkyurvxupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839248.2837665-132-112968657552343/AnsiballZ_systemd.py'
Jan 31 06:00:48 compute-0 sudo[118052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:49 compute-0 python3.9[118054]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:00:49 compute-0 ceph-mon[75251]: 10.14 scrub starts
Jan 31 06:00:49 compute-0 ceph-mon[75251]: 10.14 scrub ok
Jan 31 06:00:49 compute-0 systemd[1]: Reloading.
Jan 31 06:00:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:49 compute-0 systemd-rc-local-generator[118076]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:00:49 compute-0 systemd-sysv-generator[118079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:00:49 compute-0 sudo[118052]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:49 compute-0 sudo[118240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpoinzurdgrdbxzvwmsdigraavlfxlsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839249.7022498-140-136083527091928/AnsiballZ_stat.py'
Jan 31 06:00:49 compute-0 sudo[118240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:50 compute-0 python3.9[118242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:50 compute-0 sudo[118240]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:50 compute-0 ceph-mon[75251]: pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:50 compute-0 sudo[118318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmatvhwmerspuwndryvphpktnwvcgha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839249.7022498-140-136083527091928/AnsiballZ_file.py'
Jan 31 06:00:50 compute-0 sudo[118318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:50 compute-0 python3.9[118320]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:50 compute-0 sudo[118318]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:50 compute-0 sudo[118470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqtpkpwgwthajfkzmfpqeqwolyvkfhce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839250.672438-152-155481828353640/AnsiballZ_stat.py'
Jan 31 06:00:50 compute-0 sudo[118470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:51 compute-0 python3.9[118472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:51 compute-0 sudo[118470]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:51 compute-0 sudo[118549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbwpmxwilihoszfedcfgwmvazqigtsxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839250.672438-152-155481828353640/AnsiballZ_file.py'
Jan 31 06:00:51 compute-0 sudo[118549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:51 compute-0 python3.9[118551]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:51 compute-0 sudo[118549]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:51 compute-0 sudo[118701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pelpfesxbvthrouniphtjqdynqrcypov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839251.6562479-164-213960742590219/AnsiballZ_systemd.py'
Jan 31 06:00:51 compute-0 sudo[118701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:52 compute-0 python3.9[118703]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:00:52 compute-0 systemd[1]: Reloading.
Jan 31 06:00:52 compute-0 systemd-sysv-generator[118735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:00:52 compute-0 systemd-rc-local-generator[118732]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:00:52 compute-0 ceph-mon[75251]: pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:52 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 06:00:52 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 06:00:52 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 06:00:52 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 06:00:52 compute-0 sudo[118701]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:53 compute-0 python3.9[118895]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:00:53 compute-0 network[118912]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:00:53 compute-0 network[118913]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:00:53 compute-0 network[118914]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:00:53 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 31 06:00:53 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 31 06:00:54 compute-0 ceph-mon[75251]: pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:54 compute-0 ceph-mon[75251]: 9.15 scrub starts
Jan 31 06:00:54 compute-0 ceph-mon[75251]: 9.15 scrub ok
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:00:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:00:55 compute-0 ceph-mon[75251]: pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:55 compute-0 sudo[119174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxvjlgjgisavgvasbdzlwnmhyfwnofix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839255.682891-190-202812661932902/AnsiballZ_stat.py'
Jan 31 06:00:55 compute-0 sudo[119174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:56 compute-0 python3.9[119176]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:56 compute-0 sudo[119174]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:56 compute-0 sudo[119252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdgshwzjbljjkaznfzbtbjbibdhewtvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839255.682891-190-202812661932902/AnsiballZ_file.py'
Jan 31 06:00:56 compute-0 sudo[119252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:56 compute-0 python3.9[119254]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:56 compute-0 sudo[119252]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:57 compute-0 sudo[119404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpkipzqaklpoekmlsmdpdhckxahlqzlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839256.8303761-203-224961338796803/AnsiballZ_file.py'
Jan 31 06:00:57 compute-0 sudo[119404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:57 compute-0 python3.9[119406]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:57 compute-0 sudo[119404]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:57 compute-0 sudo[119556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbqwmprekfdmisndstqdfdyiojjalaby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839257.3928468-211-78478241005791/AnsiballZ_stat.py'
Jan 31 06:00:57 compute-0 sudo[119556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:57 compute-0 python3.9[119558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:00:57 compute-0 sudo[119556]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:58 compute-0 sudo[119634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbahtnfbzmwspbbrjrnrzboettiqeglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839257.3928468-211-78478241005791/AnsiballZ_file.py'
Jan 31 06:00:58 compute-0 sudo[119634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:00:58 compute-0 python3.9[119636]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:58 compute-0 sudo[119634]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:58 compute-0 ceph-mon[75251]: pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:58 compute-0 sudo[119786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loeddzzscjyvjqawwplpbqekvvkcmtva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839258.485725-226-80440401704035/AnsiballZ_timezone.py'
Jan 31 06:00:58 compute-0 sudo[119786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:59 compute-0 python3.9[119788]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 06:00:59 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 06:00:59 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 06:00:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:00:59 compute-0 sudo[119786]: pam_unix(sudo:session): session closed for user root
Jan 31 06:00:59 compute-0 sudo[119942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inzjtdhrqlbrefcjcxhducxxlbtqminu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839259.4766524-235-96228530173159/AnsiballZ_file.py'
Jan 31 06:00:59 compute-0 sudo[119942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:00:59 compute-0 python3.9[119944]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:00:59 compute-0 sudo[119942]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:00 compute-0 sudo[120094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otrialhfsvuljqopijdqbidpbegkjrzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839260.037717-243-179556557335730/AnsiballZ_stat.py'
Jan 31 06:01:00 compute-0 sudo[120094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:00 compute-0 ceph-mon[75251]: pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:01 compute-0 python3.9[120096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:01 compute-0 sudo[120094]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:01 compute-0 sudo[120172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szokeodxchdsgdljutumlysnjbikspjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839260.037717-243-179556557335730/AnsiballZ_file.py'
Jan 31 06:01:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:01 compute-0 sudo[120172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:01 compute-0 python3.9[120174]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:01 compute-0 sudo[120172]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:01 compute-0 CROND[120224]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 06:01:01 compute-0 run-parts[120232]: (/etc/cron.hourly) starting 0anacron
Jan 31 06:01:01 compute-0 anacron[120286]: Anacron started on 2026-01-31
Jan 31 06:01:01 compute-0 anacron[120286]: Will run job `cron.daily' in 27 min.
Jan 31 06:01:01 compute-0 anacron[120286]: Will run job `cron.weekly' in 47 min.
Jan 31 06:01:01 compute-0 anacron[120286]: Will run job `cron.monthly' in 67 min.
Jan 31 06:01:01 compute-0 anacron[120286]: Jobs will be executed sequentially
Jan 31 06:01:01 compute-0 run-parts[120288]: (/etc/cron.hourly) finished 0anacron
Jan 31 06:01:01 compute-0 CROND[120222]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 06:01:01 compute-0 sudo[120339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbumwoemhzertepgmvghekvprsuwhdmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839261.5816019-255-146629476438901/AnsiballZ_stat.py'
Jan 31 06:01:01 compute-0 sudo[120339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:01 compute-0 python3.9[120341]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:01 compute-0 sudo[120339]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:02 compute-0 ceph-mon[75251]: pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:02 compute-0 sudo[120417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vekntqabslfwkublzjjlaikcoyosroar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839261.5816019-255-146629476438901/AnsiballZ_file.py'
Jan 31 06:01:02 compute-0 sudo[120417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:02 compute-0 python3.9[120419]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.a5io_mbq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:02 compute-0 sudo[120417]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:02 compute-0 sudo[120569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwkfzuidybcnwyteqdgohsgewnyicufo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839262.4975166-267-28559489110131/AnsiballZ_stat.py'
Jan 31 06:01:02 compute-0 sudo[120569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:04 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:04 compute-0 python3.9[120571]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:04 compute-0 sudo[120569]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:04 compute-0 ceph-mon[75251]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:04 compute-0 sudo[120647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuumciggmavyhpnokutumhrmgebsqvoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839262.4975166-267-28559489110131/AnsiballZ_file.py'
Jan 31 06:01:04 compute-0 sudo[120647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:04 compute-0 python3.9[120649]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:04 compute-0 sudo[120647]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:05 compute-0 sudo[120799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwxyakmeuisngaismffnsinnpklrtnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839265.0885866-280-119555263952209/AnsiballZ_command.py'
Jan 31 06:01:05 compute-0 sudo[120799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:05 compute-0 python3.9[120801]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:01:05 compute-0 sudo[120799]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:05 compute-0 ceph-mon[75251]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:06 compute-0 sudo[120952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiaulmbswspaippuirffordubquczmpd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769839265.8286898-288-225710935210981/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 06:01:06 compute-0 sudo[120952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:06 compute-0 python3[120954]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 06:01:06 compute-0 sudo[120952]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:06 compute-0 sudo[121104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixkwqhcbjfwphicdsanjquoowjfztlza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839266.6198192-296-191205456541296/AnsiballZ_stat.py'
Jan 31 06:01:06 compute-0 sudo[121104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:07 compute-0 python3.9[121106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:07 compute-0 sudo[121104]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:07 compute-0 sudo[121182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohjmxwgtksqcwpdtmpsjctnhjtaerjbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839266.6198192-296-191205456541296/AnsiballZ_file.py'
Jan 31 06:01:07 compute-0 sudo[121182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:07 compute-0 python3.9[121184]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:07 compute-0 sudo[121182]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:08 compute-0 sudo[121334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgaqofvpoehieusgvejcrycdssrdweyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839267.9127433-308-170499979806668/AnsiballZ_stat.py'
Jan 31 06:01:08 compute-0 sudo[121334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:08 compute-0 ceph-mon[75251]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:08 compute-0 python3.9[121336]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:08 compute-0 sudo[121334]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:08 compute-0 sudo[121459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybeezajszhysglbpwqxxdzrywzdnkuoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839267.9127433-308-170499979806668/AnsiballZ_copy.py'
Jan 31 06:01:08 compute-0 sudo[121459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:09 compute-0 python3.9[121461]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839267.9127433-308-170499979806668/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:09 compute-0 sudo[121459]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:09 compute-0 sudo[121611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlulukfzdoibiqavzgjqqkrzrkpnsfhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839269.3732944-323-222131202290148/AnsiballZ_stat.py'
Jan 31 06:01:09 compute-0 sudo[121611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:09 compute-0 python3.9[121613]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:09 compute-0 sudo[121611]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:10 compute-0 sudo[121689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmnnbkosyqdmlpgzyxtaxoxzpgtijmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839269.3732944-323-222131202290148/AnsiballZ_file.py'
Jan 31 06:01:10 compute-0 sudo[121689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:10 compute-0 python3.9[121691]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:10 compute-0 sudo[121689]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:10 compute-0 ceph-mon[75251]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:10 compute-0 sudo[121841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmwxoryqyuuervkfcacjwhahhxxcybgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839270.4428742-335-134767590591760/AnsiballZ_stat.py'
Jan 31 06:01:10 compute-0 sudo[121841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:10 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 31 06:01:10 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 31 06:01:10 compute-0 python3.9[121843]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:10 compute-0 sudo[121841]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:11 compute-0 sudo[121919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsnctyirxdlwzjxizimahzyuzdumuvwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839270.4428742-335-134767590591760/AnsiballZ_file.py'
Jan 31 06:01:11 compute-0 sudo[121919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:11 compute-0 python3.9[121921]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:11 compute-0 sudo[121919]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:11 compute-0 ceph-mon[75251]: 9.14 scrub starts
Jan 31 06:01:11 compute-0 ceph-mon[75251]: 9.14 scrub ok
Jan 31 06:01:11 compute-0 sudo[122071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilzyvfkaxaglrwwkbsesbucklqoyqlij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839271.531563-347-149392928469970/AnsiballZ_stat.py'
Jan 31 06:01:11 compute-0 sudo[122071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:11 compute-0 python3.9[122073]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:12 compute-0 sudo[122071]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:12 compute-0 sudo[122149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utytgtedtcwueyyjlvtqitjxwdhyycqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839271.531563-347-149392928469970/AnsiballZ_file.py'
Jan 31 06:01:12 compute-0 sudo[122149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:12 compute-0 python3.9[122151]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:12 compute-0 sudo[122149]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:12 compute-0 ceph-mon[75251]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:12 compute-0 sudo[122301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozapkbrgqlzeydwbqpwrjmbdhawzwwvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839272.6968439-360-258557900119911/AnsiballZ_command.py'
Jan 31 06:01:12 compute-0 sudo[122301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:13 compute-0 python3.9[122303]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:01:13 compute-0 sudo[122301]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:13 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 31 06:01:13 compute-0 sudo[122456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trikajfgzuyodjhbdbntenfrnpdtzyep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839273.3055212-368-120599970776977/AnsiballZ_blockinfile.py'
Jan 31 06:01:13 compute-0 sudo[122456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:13 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 31 06:01:13 compute-0 ceph-mon[75251]: pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:13 compute-0 python3.9[122458]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:13 compute-0 sudo[122456]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:14 compute-0 sudo[122608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmkchamhvsalymdrnsifyampeidcnjjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839274.2204375-377-97935457785408/AnsiballZ_file.py'
Jan 31 06:01:14 compute-0 sudo[122608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:14 compute-0 python3.9[122610]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:14 compute-0 sudo[122608]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:14 compute-0 ceph-mon[75251]: 9.2 scrub starts
Jan 31 06:01:14 compute-0 ceph-mon[75251]: 9.2 scrub ok
Jan 31 06:01:15 compute-0 sudo[122760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxgctobzqkkuixrrqnuoqqdyrzaopfgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839274.8220932-377-84542478371227/AnsiballZ_file.py'
Jan 31 06:01:15 compute-0 sudo[122760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:15 compute-0 python3.9[122762]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:15 compute-0 sudo[122760]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:01:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:01:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:01:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:01:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:01:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:01:15 compute-0 sudo[122912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxcvlqplxmofoumbghbtdsxbvyjscmji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839275.4344823-392-241471878599419/AnsiballZ_mount.py'
Jan 31 06:01:15 compute-0 sudo[122912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:15 compute-0 ceph-mon[75251]: pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:16 compute-0 python3.9[122914]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 06:01:16 compute-0 sudo[122912]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:16 compute-0 sudo[123064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldotmycxwpyhbmesifhadrvzclnzogrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839276.202902-392-235376489847189/AnsiballZ_mount.py'
Jan 31 06:01:16 compute-0 sudo[123064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:16 compute-0 python3.9[123066]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 06:01:16 compute-0 sudo[123064]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:17 compute-0 sshd-session[115888]: Connection closed by 192.168.122.30 port 44984
Jan 31 06:01:17 compute-0 sshd-session[115885]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:01:17 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 06:01:17 compute-0 systemd[1]: session-39.scope: Consumed 24.128s CPU time.
Jan 31 06:01:17 compute-0 systemd-logind[797]: Session 39 logged out. Waiting for processes to exit.
Jan 31 06:01:17 compute-0 systemd-logind[797]: Removed session 39.
Jan 31 06:01:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:18 compute-0 ceph-mon[75251]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:20 compute-0 ceph-mon[75251]: pgmap v351: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:21 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 31 06:01:21 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 31 06:01:22 compute-0 ceph-mon[75251]: pgmap v352: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:22 compute-0 ceph-mon[75251]: 9.0 scrub starts
Jan 31 06:01:22 compute-0 ceph-mon[75251]: 9.0 scrub ok
Jan 31 06:01:22 compute-0 sshd-session[123092]: Accepted publickey for zuul from 192.168.122.30 port 60978 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:01:22 compute-0 systemd-logind[797]: New session 40 of user zuul.
Jan 31 06:01:22 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 31 06:01:22 compute-0 sshd-session[123092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:01:22 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 06:01:22 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 06:01:22 compute-0 sudo[123245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhqufksconmunhaiyvbclgaxsenpxiwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839282.56111-16-73484033783602/AnsiballZ_tempfile.py'
Jan 31 06:01:23 compute-0 sudo[123245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:23 compute-0 python3.9[123247]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 06:01:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:23 compute-0 sudo[123245]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:23 compute-0 ceph-mon[75251]: 9.a scrub starts
Jan 31 06:01:23 compute-0 ceph-mon[75251]: 9.a scrub ok
Jan 31 06:01:23 compute-0 sudo[123397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxmhdeamvtlbxltmxhdpoqcjhnkwjkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839283.3664546-28-184802165010687/AnsiballZ_stat.py'
Jan 31 06:01:23 compute-0 sudo[123397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:23 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 06:01:23 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 06:01:23 compute-0 python3.9[123399]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:01:23 compute-0 sudo[123397]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:24 compute-0 ceph-mon[75251]: pgmap v353: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:24 compute-0 ceph-mon[75251]: 9.4 scrub starts
Jan 31 06:01:24 compute-0 ceph-mon[75251]: 9.4 scrub ok
Jan 31 06:01:24 compute-0 sudo[123551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxqdpayjojjzeamptozccywzcfgaejuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839284.0850987-36-215515053937168/AnsiballZ_slurp.py'
Jan 31 06:01:24 compute-0 sudo[123551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:24 compute-0 python3.9[123553]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 06:01:24 compute-0 sudo[123551]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:25 compute-0 sudo[123703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnpeuuozlwdtdouquqstcauykcnijutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839284.7850714-44-46540373261488/AnsiballZ_stat.py'
Jan 31 06:01:25 compute-0 sudo[123703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:25 compute-0 python3.9[123705]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible._gdg5cb8 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:01:25 compute-0 sudo[123703]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:25 compute-0 sudo[123828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-johtcszrjrwwgmkyjjofxztuwprpseir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839284.7850714-44-46540373261488/AnsiballZ_copy.py'
Jan 31 06:01:25 compute-0 sudo[123828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:25 compute-0 python3.9[123830]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible._gdg5cb8 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839284.7850714-44-46540373261488/.source._gdg5cb8 _original_basename=.7bid81nq follow=False checksum=f058cf102b71e7525b9be498bc4c8b39dc1eea88 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:25 compute-0 sudo[123828]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:26 compute-0 ceph-mon[75251]: pgmap v354: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:26 compute-0 sudo[123980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrcoechztramnsepoimmhoehnhsygvdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839286.146295-59-5678269551851/AnsiballZ_setup.py'
Jan 31 06:01:26 compute-0 sudo[123980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:26 compute-0 python3.9[123982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:01:26 compute-0 sudo[123980]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:27 compute-0 sudo[124132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipgyixdrzeaqhmofkzcnghlthzjzzcfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839287.1433876-68-99179488086376/AnsiballZ_blockinfile.py'
Jan 31 06:01:27 compute-0 sudo[124132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:27 compute-0 python3.9[124134]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0hYuRiON/npbsK3YxZQ0e0GXxHvRZX51KfYqA1GKj0gQ2C3H68tgNEbNyr0sftfDPuhYj51H0/ArHvFJ19lm6yn/wR3usRFJekl2qXu9gaIBXIgezD8brSkp872zSISy5AqDV8I4WgjoqXF0YuowEtqDGnj5xTi5pyh8qVeV2Y500OBmmCqYA/n4SGP02fF2Lho3j2MIWLe8oJ7/JBkYmpjsHeKUMD+7iv0LDla/fEYTiq9gjci/Lo8O+t31VKVNjRntj/p8Wo+0uPzfw3dePHKFRC1sg+aMG940YVUyRsDKiHOCZrditEGnrBcLep2TyDO4GzaAE6Tg1D6qLztki2H45FAhYE1dIxodEi8bdo6wH1Ss8vIdez8pkFlW6FTObkLxh00QwTolJ+rMZkmuerAkfYFh8HuEmSa85VCdGrRwosjOAQIlJv4ONNSo4xwyI0/Ckvw80IWv722q4aSUzN06SLnHK5RtyPrGKBhYX1zbKPzTysGB7oaZU+/jzVW0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPah6YYl2J1cbbMzkFDMKRbiSHoV+FPnQcDnTDMFvGI
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLg2p/KWi5iAoiT5fV6/vO4iRSCLXzVhh5LWjjpqbjNRY1tST3/3JaBg2W+zT/5ijaf/FaSVjU4iMjimCFU3BTU=
                                              create=True mode=0644 path=/tmp/ansible._gdg5cb8 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 31 06:01:27 compute-0 sudo[124132]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:27 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 31 06:01:28 compute-0 sudo[124284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgahhaixxrwtkitofmteyeaqrgcuzned ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839287.785817-76-188878009769558/AnsiballZ_command.py'
Jan 31 06:01:28 compute-0 sudo[124284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:28 compute-0 python3.9[124286]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._gdg5cb8' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:01:28 compute-0 sudo[124284]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:28 compute-0 ceph-mon[75251]: pgmap v355: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:28 compute-0 ceph-mon[75251]: 9.1a scrub starts
Jan 31 06:01:28 compute-0 ceph-mon[75251]: 9.1a scrub ok
Jan 31 06:01:28 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 06:01:28 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 06:01:28 compute-0 sudo[124438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpvhzhghemcoanxvyddlzzbhcwfiixa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839288.4418585-84-269574966101871/AnsiballZ_file.py'
Jan 31 06:01:28 compute-0 sudo[124438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:29 compute-0 python3.9[124440]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._gdg5cb8 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:29 compute-0 sudo[124438]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:29 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 06:01:29 compute-0 sshd-session[123095]: Connection closed by 192.168.122.30 port 60978
Jan 31 06:01:29 compute-0 sshd-session[123092]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:01:29 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 06:01:29 compute-0 systemd[1]: session-40.scope: Consumed 3.957s CPU time.
Jan 31 06:01:29 compute-0 systemd-logind[797]: Session 40 logged out. Waiting for processes to exit.
Jan 31 06:01:29 compute-0 systemd-logind[797]: Removed session 40.
Jan 31 06:01:29 compute-0 ceph-mon[75251]: 9.10 scrub starts
Jan 31 06:01:29 compute-0 ceph-mon[75251]: 9.10 scrub ok
Jan 31 06:01:29 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 06:01:29 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 06:01:30 compute-0 ceph-mon[75251]: pgmap v356: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:30 compute-0 ceph-mon[75251]: 9.12 scrub starts
Jan 31 06:01:30 compute-0 ceph-mon[75251]: 9.12 scrub ok
Jan 31 06:01:30 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 06:01:30 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 06:01:30 compute-0 sudo[124468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:01:30 compute-0 sudo[124468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:30 compute-0 sudo[124468]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:30 compute-0 sudo[124493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:01:30 compute-0 sudo[124493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:31 compute-0 sudo[124493]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:01:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:01:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:01:31 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:01:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:01:31 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:01:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:01:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:01:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:01:31 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:01:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:01:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:01:31 compute-0 sudo[124549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:01:31 compute-0 sudo[124549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:31 compute-0 sudo[124549]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:31 compute-0 sudo[124574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:01:31 compute-0 sudo[124574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:31 compute-0 ceph-mon[75251]: 9.1f scrub starts
Jan 31 06:01:31 compute-0 ceph-mon[75251]: 9.1f scrub ok
Jan 31 06:01:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:01:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:01:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:01:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:01:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:01:31 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:01:31 compute-0 podman[124611]: 2026-01-31 06:01:31.739972578 +0000 UTC m=+0.080116012 container create 4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:01:31 compute-0 podman[124611]: 2026-01-31 06:01:31.678439242 +0000 UTC m=+0.018582696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:01:31 compute-0 systemd[1]: Started libpod-conmon-4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1.scope.
Jan 31 06:01:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:01:31 compute-0 podman[124611]: 2026-01-31 06:01:31.82623489 +0000 UTC m=+0.166378394 container init 4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:01:31 compute-0 podman[124611]: 2026-01-31 06:01:31.837439831 +0000 UTC m=+0.177583305 container start 4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:01:31 compute-0 podman[124611]: 2026-01-31 06:01:31.841978667 +0000 UTC m=+0.182122111 container attach 4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 06:01:31 compute-0 competent_mclaren[124627]: 167 167
Jan 31 06:01:31 compute-0 systemd[1]: libpod-4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1.scope: Deactivated successfully.
Jan 31 06:01:31 compute-0 podman[124611]: 2026-01-31 06:01:31.844322802 +0000 UTC m=+0.184466246 container died 4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e45e659250e4fe2b718d1204931a71595b3cfc17af82310c49e7399cc328b9ee-merged.mount: Deactivated successfully.
Jan 31 06:01:31 compute-0 podman[124611]: 2026-01-31 06:01:31.949887298 +0000 UTC m=+0.290030762 container remove 4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:01:31 compute-0 systemd[1]: libpod-conmon-4ca80f39248ea72eaae63f798adaeb111b5630fece82d0ff2b6432e4f0a327a1.scope: Deactivated successfully.
Jan 31 06:01:32 compute-0 podman[124651]: 2026-01-31 06:01:32.11043679 +0000 UTC m=+0.091001944 container create efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 06:01:32 compute-0 podman[124651]: 2026-01-31 06:01:32.040796079 +0000 UTC m=+0.021361243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:01:32 compute-0 systemd[1]: Started libpod-conmon-efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898.scope.
Jan 31 06:01:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c6f002517c73ce180a780d4adac760205a65f2c767c2d371723a50021358412/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c6f002517c73ce180a780d4adac760205a65f2c767c2d371723a50021358412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c6f002517c73ce180a780d4adac760205a65f2c767c2d371723a50021358412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c6f002517c73ce180a780d4adac760205a65f2c767c2d371723a50021358412/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c6f002517c73ce180a780d4adac760205a65f2c767c2d371723a50021358412/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:32 compute-0 podman[124651]: 2026-01-31 06:01:32.300734415 +0000 UTC m=+0.281299589 container init efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:01:32 compute-0 podman[124651]: 2026-01-31 06:01:32.315399292 +0000 UTC m=+0.295964446 container start efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:01:32 compute-0 podman[124651]: 2026-01-31 06:01:32.324543935 +0000 UTC m=+0.305113440 container attach efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:01:32 compute-0 agitated_kilby[124668]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:01:32 compute-0 agitated_kilby[124668]: --> All data devices are unavailable
Jan 31 06:01:32 compute-0 systemd[1]: libpod-efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898.scope: Deactivated successfully.
Jan 31 06:01:32 compute-0 podman[124651]: 2026-01-31 06:01:32.697501034 +0000 UTC m=+0.678066188 container died efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 06:01:32 compute-0 ceph-mon[75251]: pgmap v357: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c6f002517c73ce180a780d4adac760205a65f2c767c2d371723a50021358412-merged.mount: Deactivated successfully.
Jan 31 06:01:32 compute-0 podman[124651]: 2026-01-31 06:01:32.810012304 +0000 UTC m=+0.790577458 container remove efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_kilby, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 06:01:32 compute-0 systemd[1]: libpod-conmon-efde93f7295d9a599f7904497623464081cdc68c1d06e4e9e0e03a6c77e8e898.scope: Deactivated successfully.
Jan 31 06:01:32 compute-0 sudo[124574]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:32 compute-0 sudo[124702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:01:32 compute-0 sudo[124702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:32 compute-0 sudo[124702]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:32 compute-0 sudo[124727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:01:32 compute-0 sudo[124727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:33 compute-0 podman[124763]: 2026-01-31 06:01:33.200244853 +0000 UTC m=+0.042479949 container create 0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:01:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:33 compute-0 systemd[1]: Started libpod-conmon-0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0.scope.
Jan 31 06:01:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:01:33 compute-0 podman[124763]: 2026-01-31 06:01:33.177524113 +0000 UTC m=+0.019759219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:01:33 compute-0 podman[124763]: 2026-01-31 06:01:33.285618259 +0000 UTC m=+0.127853435 container init 0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_buck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:01:33 compute-0 podman[124763]: 2026-01-31 06:01:33.293498508 +0000 UTC m=+0.135733594 container start 0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 06:01:33 compute-0 affectionate_buck[124780]: 167 167
Jan 31 06:01:33 compute-0 systemd[1]: libpod-0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0.scope: Deactivated successfully.
Jan 31 06:01:33 compute-0 podman[124763]: 2026-01-31 06:01:33.303802394 +0000 UTC m=+0.146037520 container attach 0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 06:01:33 compute-0 podman[124763]: 2026-01-31 06:01:33.304341599 +0000 UTC m=+0.146576725 container died 0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-71ae518bbbfcda1b9c584388c6567939f1bdb5b3ce219a6b9514ad017659bcba-merged.mount: Deactivated successfully.
Jan 31 06:01:33 compute-0 podman[124763]: 2026-01-31 06:01:33.374684889 +0000 UTC m=+0.216919985 container remove 0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:01:33 compute-0 systemd[1]: libpod-conmon-0f83bf9ff26361a6d2d0944d0c97be8b2b6db5fba2ab63190724220abc2625f0.scope: Deactivated successfully.
Jan 31 06:01:33 compute-0 podman[124805]: 2026-01-31 06:01:33.497049621 +0000 UTC m=+0.051478708 container create 67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feistel, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 06:01:33 compute-0 systemd[1]: Started libpod-conmon-67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9.scope.
Jan 31 06:01:33 compute-0 podman[124805]: 2026-01-31 06:01:33.464287113 +0000 UTC m=+0.018716220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:01:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e1bf443cb07f8d3540fec86da844e7ec1504e9a561df1c1062e9138d4e102a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e1bf443cb07f8d3540fec86da844e7ec1504e9a561df1c1062e9138d4e102a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e1bf443cb07f8d3540fec86da844e7ec1504e9a561df1c1062e9138d4e102a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e1bf443cb07f8d3540fec86da844e7ec1504e9a561df1c1062e9138d4e102a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:33 compute-0 podman[124805]: 2026-01-31 06:01:33.606091744 +0000 UTC m=+0.160520831 container init 67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feistel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:01:33 compute-0 podman[124805]: 2026-01-31 06:01:33.613969713 +0000 UTC m=+0.168398790 container start 67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:01:33 compute-0 podman[124805]: 2026-01-31 06:01:33.668427152 +0000 UTC m=+0.222856259 container attach 67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feistel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:01:33 compute-0 ceph-mon[75251]: pgmap v358: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:33 compute-0 great_feistel[124822]: {
Jan 31 06:01:33 compute-0 great_feistel[124822]:     "0": [
Jan 31 06:01:33 compute-0 great_feistel[124822]:         {
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "devices": [
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "/dev/loop3"
Jan 31 06:01:33 compute-0 great_feistel[124822]:             ],
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_name": "ceph_lv0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_size": "21470642176",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "name": "ceph_lv0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "tags": {
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cluster_name": "ceph",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.crush_device_class": "",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.encrypted": "0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.objectstore": "bluestore",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osd_id": "0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.type": "block",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.vdo": "0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.with_tpm": "0"
Jan 31 06:01:33 compute-0 great_feistel[124822]:             },
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "type": "block",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "vg_name": "ceph_vg0"
Jan 31 06:01:33 compute-0 great_feistel[124822]:         }
Jan 31 06:01:33 compute-0 great_feistel[124822]:     ],
Jan 31 06:01:33 compute-0 great_feistel[124822]:     "1": [
Jan 31 06:01:33 compute-0 great_feistel[124822]:         {
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "devices": [
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "/dev/loop4"
Jan 31 06:01:33 compute-0 great_feistel[124822]:             ],
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_name": "ceph_lv1",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_size": "21470642176",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "name": "ceph_lv1",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "tags": {
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cluster_name": "ceph",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.crush_device_class": "",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.encrypted": "0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.objectstore": "bluestore",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osd_id": "1",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.type": "block",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.vdo": "0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.with_tpm": "0"
Jan 31 06:01:33 compute-0 great_feistel[124822]:             },
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "type": "block",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "vg_name": "ceph_vg1"
Jan 31 06:01:33 compute-0 great_feistel[124822]:         }
Jan 31 06:01:33 compute-0 great_feistel[124822]:     ],
Jan 31 06:01:33 compute-0 great_feistel[124822]:     "2": [
Jan 31 06:01:33 compute-0 great_feistel[124822]:         {
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "devices": [
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "/dev/loop5"
Jan 31 06:01:33 compute-0 great_feistel[124822]:             ],
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_name": "ceph_lv2",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_size": "21470642176",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "name": "ceph_lv2",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "tags": {
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.cluster_name": "ceph",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.crush_device_class": "",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.encrypted": "0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.objectstore": "bluestore",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osd_id": "2",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.type": "block",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.vdo": "0",
Jan 31 06:01:33 compute-0 great_feistel[124822]:                 "ceph.with_tpm": "0"
Jan 31 06:01:33 compute-0 great_feistel[124822]:             },
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "type": "block",
Jan 31 06:01:33 compute-0 great_feistel[124822]:             "vg_name": "ceph_vg2"
Jan 31 06:01:33 compute-0 great_feistel[124822]:         }
Jan 31 06:01:33 compute-0 great_feistel[124822]:     ]
Jan 31 06:01:33 compute-0 great_feistel[124822]: }
Jan 31 06:01:33 compute-0 systemd[1]: libpod-67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9.scope: Deactivated successfully.
Jan 31 06:01:33 compute-0 podman[124805]: 2026-01-31 06:01:33.892779762 +0000 UTC m=+0.447208889 container died 67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feistel, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 31 06:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-89e1bf443cb07f8d3540fec86da844e7ec1504e9a561df1c1062e9138d4e102a-merged.mount: Deactivated successfully.
Jan 31 06:01:34 compute-0 podman[124805]: 2026-01-31 06:01:34.047977355 +0000 UTC m=+0.602406472 container remove 67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:01:34 compute-0 systemd[1]: libpod-conmon-67a4a5983c331b608ea16224b36694f08c4dbe9a106c80a59e8a6375822e23a9.scope: Deactivated successfully.
Jan 31 06:01:34 compute-0 sudo[124727]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:34 compute-0 sudo[124845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:01:34 compute-0 sudo[124845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:34 compute-0 sudo[124845]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:34 compute-0 sudo[124870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:01:34 compute-0 sudo[124870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:34 compute-0 podman[124906]: 2026-01-31 06:01:34.581758764 +0000 UTC m=+0.046483240 container create 0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:01:34 compute-0 systemd[1]: Started libpod-conmon-0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655.scope.
Jan 31 06:01:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:01:34 compute-0 podman[124906]: 2026-01-31 06:01:34.558874499 +0000 UTC m=+0.023599015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:01:34 compute-0 podman[124906]: 2026-01-31 06:01:34.665468295 +0000 UTC m=+0.130192791 container init 0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:01:34 compute-0 podman[124906]: 2026-01-31 06:01:34.675572255 +0000 UTC m=+0.140296711 container start 0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:01:34 compute-0 podman[124906]: 2026-01-31 06:01:34.679311028 +0000 UTC m=+0.144035524 container attach 0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:01:34 compute-0 crazy_nobel[124922]: 167 167
Jan 31 06:01:34 compute-0 systemd[1]: libpod-0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655.scope: Deactivated successfully.
Jan 31 06:01:34 compute-0 podman[124906]: 2026-01-31 06:01:34.684389259 +0000 UTC m=+0.149113765 container died 0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 06:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0e1c47c36cd7c08b8245d306461d0230b28ec89ae5977d8c86ca4949ee96068-merged.mount: Deactivated successfully.
Jan 31 06:01:34 compute-0 podman[124906]: 2026-01-31 06:01:34.775075303 +0000 UTC m=+0.239799779 container remove 0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 06:01:34 compute-0 systemd[1]: libpod-conmon-0ca734838bbe795b4c44ab475728daa4d9a08dd79e6a6ff3a19ee24d97fb8655.scope: Deactivated successfully.
Jan 31 06:01:34 compute-0 sshd-session[124934]: Accepted publickey for zuul from 192.168.122.30 port 40326 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:01:34 compute-0 systemd-logind[797]: New session 41 of user zuul.
Jan 31 06:01:34 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 31 06:01:34 compute-0 sshd-session[124934]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:01:35 compute-0 podman[124951]: 2026-01-31 06:01:34.904573384 +0000 UTC m=+0.024257264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:01:35 compute-0 podman[124951]: 2026-01-31 06:01:35.125523129 +0000 UTC m=+0.245207009 container create 4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 06:01:35 compute-0 systemd[1]: Started libpod-conmon-4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff.scope.
Jan 31 06:01:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d18b452843f342ac46899811cdd77da52afe04ec54e222519a691a4392039c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d18b452843f342ac46899811cdd77da52afe04ec54e222519a691a4392039c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d18b452843f342ac46899811cdd77da52afe04ec54e222519a691a4392039c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d18b452843f342ac46899811cdd77da52afe04ec54e222519a691a4392039c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:01:35 compute-0 podman[124951]: 2026-01-31 06:01:35.396529802 +0000 UTC m=+0.516213692 container init 4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dhawan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 31 06:01:35 compute-0 podman[124951]: 2026-01-31 06:01:35.405992605 +0000 UTC m=+0.525676455 container start 4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dhawan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:01:35 compute-0 podman[124951]: 2026-01-31 06:01:35.414644365 +0000 UTC m=+0.534328255 container attach 4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dhawan, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 06:01:35 compute-0 python3.9[125122]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:01:35 compute-0 lvm[125201]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:01:35 compute-0 lvm[125201]: VG ceph_vg1 finished
Jan 31 06:01:35 compute-0 lvm[125198]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:01:35 compute-0 lvm[125198]: VG ceph_vg0 finished
Jan 31 06:01:35 compute-0 lvm[125203]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:01:35 compute-0 lvm[125203]: VG ceph_vg2 finished
Jan 31 06:01:36 compute-0 busy_dhawan[125022]: {}
Jan 31 06:01:36 compute-0 systemd[1]: libpod-4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff.scope: Deactivated successfully.
Jan 31 06:01:36 compute-0 podman[124951]: 2026-01-31 06:01:36.095928662 +0000 UTC m=+1.215612512 container died 4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dhawan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 06:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d18b452843f342ac46899811cdd77da52afe04ec54e222519a691a4392039c1-merged.mount: Deactivated successfully.
Jan 31 06:01:36 compute-0 podman[124951]: 2026-01-31 06:01:36.161788208 +0000 UTC m=+1.281472058 container remove 4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 06:01:36 compute-0 systemd[1]: libpod-conmon-4d674a3c222c1849f0185a62f330147916fd649e9eb2ac1a4596b3424859cbff.scope: Deactivated successfully.
Jan 31 06:01:36 compute-0 sudo[124870]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:01:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:01:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:01:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:01:36 compute-0 sudo[125294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:01:36 compute-0 sudo[125294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:01:36 compute-0 sudo[125294]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:36 compute-0 ceph-mon[75251]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:01:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:01:36 compute-0 sudo[125392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grputdzcokymahpreyochdfjbnwnbdwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839296.0913877-27-231333792149421/AnsiballZ_systemd.py'
Jan 31 06:01:36 compute-0 sudo[125392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:36 compute-0 python3.9[125394]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 06:01:36 compute-0 sudo[125392]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:37 compute-0 sudo[125546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjnojlimhiqbqseheoixyqyrjpdieldx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839297.1179395-35-247541687331098/AnsiballZ_systemd.py'
Jan 31 06:01:37 compute-0 sudo[125546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:37 compute-0 python3.9[125548]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:01:37 compute-0 sudo[125546]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:38 compute-0 sudo[125699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcuzlmdaizewsttxtfacekzotfpefopm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839297.8050013-44-102062610387057/AnsiballZ_command.py'
Jan 31 06:01:38 compute-0 sudo[125699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:38 compute-0 python3.9[125701]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:01:38 compute-0 sudo[125699]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:38 compute-0 ceph-mon[75251]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:38 compute-0 sudo[125852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrjkujcshrhyuwcnwmuchqwfdrpmwpcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839298.50846-52-221332476978683/AnsiballZ_stat.py'
Jan 31 06:01:38 compute-0 sudo[125852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:39 compute-0 python3.9[125854]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:01:39 compute-0 sudo[125852]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:39 compute-0 sudo[126004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htlqcqpewosbnqthbmplwgltuwviwxts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839299.2380211-61-176509705521421/AnsiballZ_file.py'
Jan 31 06:01:39 compute-0 sudo[126004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:39 compute-0 python3.9[126006]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:01:39 compute-0 sudo[126004]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:40 compute-0 sshd-session[124945]: Connection closed by 192.168.122.30 port 40326
Jan 31 06:01:40 compute-0 sshd-session[124934]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:01:40 compute-0 systemd-logind[797]: Session 41 logged out. Waiting for processes to exit.
Jan 31 06:01:40 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 06:01:40 compute-0 systemd[1]: session-41.scope: Consumed 3.204s CPU time.
Jan 31 06:01:40 compute-0 systemd-logind[797]: Removed session 41.
Jan 31 06:01:40 compute-0 ceph-mon[75251]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:42 compute-0 ceph-mon[75251]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:01:44
Jan 31 06:01:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:01:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:01:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'volumes']
Jan 31 06:01:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:01:44 compute-0 ceph-mon[75251]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:44 compute-0 sshd-session[126031]: Accepted publickey for zuul from 192.168.122.30 port 49586 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:01:44 compute-0 systemd-logind[797]: New session 42 of user zuul.
Jan 31 06:01:44 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 31 06:01:44 compute-0 sshd-session[126031]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:01:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:01:45 compute-0 python3.9[126184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:01:46 compute-0 sudo[126338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bntvrruiqsvgqqhuljxjupbmlkzxmsbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839306.320767-29-230226567397046/AnsiballZ_setup.py'
Jan 31 06:01:46 compute-0 sudo[126338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:46 compute-0 ceph-mon[75251]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:46 compute-0 sshd-session[71396]: Received disconnect from 38.102.83.111 port 45428:11: disconnected by user
Jan 31 06:01:46 compute-0 sshd-session[71396]: Disconnected from user zuul 38.102.83.111 port 45428
Jan 31 06:01:46 compute-0 sshd-session[71393]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:01:46 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 06:01:46 compute-0 systemd[1]: session-17.scope: Consumed 1min 28.239s CPU time.
Jan 31 06:01:46 compute-0 systemd-logind[797]: Session 17 logged out. Waiting for processes to exit.
Jan 31 06:01:46 compute-0 systemd-logind[797]: Removed session 17.
Jan 31 06:01:46 compute-0 python3.9[126340]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:01:47 compute-0 sudo[126338]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:47 compute-0 sudo[126422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubdzwqkhuqjbmvcabzmcdrcuullgcbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839306.320767-29-230226567397046/AnsiballZ_dnf.py'
Jan 31 06:01:47 compute-0 sudo[126422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:01:47 compute-0 python3.9[126424]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 06:01:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:48 compute-0 ceph-mon[75251]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:49 compute-0 sudo[126422]: pam_unix(sudo:session): session closed for user root
Jan 31 06:01:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:49 compute-0 python3.9[126575]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:01:50 compute-0 ceph-mon[75251]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:50 compute-0 python3.9[126726]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 06:01:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:51 compute-0 python3.9[126876]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:01:52 compute-0 python3.9[127026]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:01:52 compute-0 ceph-mon[75251]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:52 compute-0 sshd-session[126034]: Connection closed by 192.168.122.30 port 49586
Jan 31 06:01:52 compute-0 sshd-session[126031]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:01:52 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 06:01:52 compute-0 systemd[1]: session-42.scope: Consumed 5.126s CPU time.
Jan 31 06:01:52 compute-0 systemd-logind[797]: Session 42 logged out. Waiting for processes to exit.
Jan 31 06:01:52 compute-0 systemd-logind[797]: Removed session 42.
Jan 31 06:01:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:54 compute-0 ceph-mon[75251]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:01:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:01:55 compute-0 ceph-mon[75251]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:58 compute-0 sshd-session[127051]: Accepted publickey for zuul from 192.168.122.30 port 53238 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:01:58 compute-0 systemd-logind[797]: New session 43 of user zuul.
Jan 31 06:01:58 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 31 06:01:58 compute-0 sshd-session[127051]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:01:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:01:58 compute-0 ceph-mon[75251]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:01:59 compute-0 python3.9[127204]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:01:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:00 compute-0 ceph-mon[75251]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:00 compute-0 sudo[127358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khdbfzbdwzoihvhuhuxfxephrssmaynz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839320.343509-45-178053738953670/AnsiballZ_file.py'
Jan 31 06:02:00 compute-0 sudo[127358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:01 compute-0 python3.9[127360]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:01 compute-0 sudo[127358]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:01 compute-0 sudo[127510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvtjdevacogwvpgyqxszgxkxwskmtfex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839321.2430189-45-168196757809907/AnsiballZ_file.py'
Jan 31 06:02:01 compute-0 sudo[127510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:01 compute-0 python3.9[127512]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:01 compute-0 sudo[127510]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:02 compute-0 sudo[127662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeaojrhuktepyecspqhrazrjooaprygc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839321.8481998-60-270290327696496/AnsiballZ_stat.py'
Jan 31 06:02:02 compute-0 sudo[127662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:02 compute-0 python3.9[127664]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:02 compute-0 sudo[127662]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:02 compute-0 ceph-mon[75251]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:03 compute-0 sudo[127785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfewzedgirluqtuprjsfbuxpaecydzgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839321.8481998-60-270290327696496/AnsiballZ_copy.py'
Jan 31 06:02:03 compute-0 sudo[127785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:03 compute-0 python3.9[127787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839321.8481998-60-270290327696496/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b47447b6375d51fa78af817fdaccd66e21c68f1d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:03 compute-0 sudo[127785]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:03 compute-0 sudo[127937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sissimvulaixariztmlcupfokzpbjgqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839323.5380685-60-108886225489996/AnsiballZ_stat.py'
Jan 31 06:02:03 compute-0 sudo[127937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:04 compute-0 ceph-mon[75251]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:04 compute-0 python3.9[127939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:04 compute-0 sudo[127937]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:04 compute-0 sudo[128060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayjwyxrnfedzzpthnfjoxrmxsnkaioxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839323.5380685-60-108886225489996/AnsiballZ_copy.py'
Jan 31 06:02:04 compute-0 sudo[128060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:05 compute-0 python3.9[128062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839323.5380685-60-108886225489996/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=beed17b5caaa7929e13d6d1389a4476d77c39f31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:05 compute-0 sudo[128060]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:05 compute-0 sudo[128212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxswcohitktjiedhpqqryewbgqcxzmie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839325.226807-60-252935117422331/AnsiballZ_stat.py'
Jan 31 06:02:05 compute-0 sudo[128212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:05 compute-0 python3.9[128214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:05 compute-0 sudo[128212]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:06 compute-0 sudo[128335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdctdhcxvubetwbrersnpvpducbxtdri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839325.226807-60-252935117422331/AnsiballZ_copy.py'
Jan 31 06:02:06 compute-0 sudo[128335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:06 compute-0 ceph-mon[75251]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:06 compute-0 python3.9[128337]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839325.226807-60-252935117422331/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0670a1b86e74afe76a3edab6bb43d7dbd0515497 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:06 compute-0 sudo[128335]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:06 compute-0 sudo[128487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfhjvimvdryzjdsrkibsjbhkkbxwavda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839326.4368513-104-7377955986831/AnsiballZ_file.py'
Jan 31 06:02:06 compute-0 sudo[128487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:06 compute-0 python3.9[128489]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:06 compute-0 sudo[128487]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:07 compute-0 sudo[128639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqqesdonlejyzkdcapyzrguvzjpsrqfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839327.0315711-104-38280900555636/AnsiballZ_file.py'
Jan 31 06:02:07 compute-0 sudo[128639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:07 compute-0 python3.9[128641]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:07 compute-0 sudo[128639]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:07 compute-0 sudo[128791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahjgdylzhtyymkhufbbzhqtdympgsraq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839327.778869-119-113923427948345/AnsiballZ_stat.py'
Jan 31 06:02:07 compute-0 sudo[128791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:08 compute-0 python3.9[128793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:08 compute-0 sudo[128791]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:08 compute-0 sudo[128914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgfsmgiyvhethqsdzknebksgwuajlskm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839327.778869-119-113923427948345/AnsiballZ_copy.py'
Jan 31 06:02:08 compute-0 sudo[128914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:08 compute-0 ceph-mon[75251]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:08 compute-0 python3.9[128916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839327.778869-119-113923427948345/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0c9349ea5b3a371d592333a652a9b9ed81bfd80d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:08 compute-0 sudo[128914]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:09 compute-0 sudo[129066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svnuefuwwgyccqhhupioeukjfulbtqpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839329.0102212-119-64383615618311/AnsiballZ_stat.py'
Jan 31 06:02:09 compute-0 sudo[129066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:09 compute-0 python3.9[129068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:09 compute-0 sudo[129066]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:09 compute-0 sudo[129189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobenmquvrwdbjwdqoinaxsfjqrvvcxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839329.0102212-119-64383615618311/AnsiballZ_copy.py'
Jan 31 06:02:09 compute-0 sudo[129189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:09 compute-0 python3.9[129191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839329.0102212-119-64383615618311/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7cf309fd506e4d34d91313ca01e49c61ca88ebf6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:09 compute-0 sudo[129189]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:09 compute-0 ceph-mon[75251]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:10 compute-0 sudo[129341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfolygfqsrcjbnjchlgjgcyoavelfggs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839330.051111-119-265618755847592/AnsiballZ_stat.py'
Jan 31 06:02:10 compute-0 sudo[129341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:10 compute-0 python3.9[129343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:10 compute-0 sudo[129341]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:10 compute-0 sudo[129464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmepigopzvzmagntpagccqsthclrdrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839330.051111-119-265618755847592/AnsiballZ_copy.py'
Jan 31 06:02:10 compute-0 sudo[129464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:11 compute-0 python3.9[129466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839330.051111-119-265618755847592/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1a2f19092d04804ff54f38dc2a9a4cb77ca68fda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:11 compute-0 sudo[129464]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:11 compute-0 sudo[129616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipxbogebmbrkynjlwabccfqvfjkmlklv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839331.3235288-163-34712843982920/AnsiballZ_file.py'
Jan 31 06:02:11 compute-0 sudo[129616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:11 compute-0 python3.9[129618]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:11 compute-0 sudo[129616]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:12 compute-0 sudo[129768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlihviysbyqmspfyhkioascqqdbadxsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839331.903759-163-39140348743910/AnsiballZ_file.py'
Jan 31 06:02:12 compute-0 sudo[129768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:12 compute-0 python3.9[129770]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:12 compute-0 sudo[129768]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:12 compute-0 ceph-mon[75251]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:12 compute-0 sudo[129920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdulfucelxlropkqatrhaabzcdegwdis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839332.620998-178-158670128561000/AnsiballZ_stat.py'
Jan 31 06:02:12 compute-0 sudo[129920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:13 compute-0 python3.9[129922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:13 compute-0 sudo[129920]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:13 compute-0 sudo[130043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opgohesvakdhmetfobtyabgskpwubbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839332.620998-178-158670128561000/AnsiballZ_copy.py'
Jan 31 06:02:13 compute-0 sudo[130043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:13 compute-0 python3.9[130045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839332.620998-178-158670128561000/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9526411a5fb20fa5778dda345d6f740cc9258197 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:13 compute-0 sudo[130043]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:13 compute-0 ceph-mon[75251]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:14 compute-0 sudo[130195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwxddfvhyrmgbltpxscgzapdicktrsir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839333.8725474-178-210696842575548/AnsiballZ_stat.py'
Jan 31 06:02:14 compute-0 sudo[130195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:14 compute-0 python3.9[130197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:14 compute-0 sudo[130195]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:14 compute-0 sudo[130318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjljplbktnizidxqakugknobzejsrwyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839333.8725474-178-210696842575548/AnsiballZ_copy.py'
Jan 31 06:02:14 compute-0 sudo[130318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:14 compute-0 python3.9[130320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839333.8725474-178-210696842575548/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7cf309fd506e4d34d91313ca01e49c61ca88ebf6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:14 compute-0 sudo[130318]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:15 compute-0 sudo[130470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euquzvprjfvnqixbjnildeeeytjhsdhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839335.0831394-178-278460120840052/AnsiballZ_stat.py'
Jan 31 06:02:15 compute-0 sudo[130470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:02:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:02:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:02:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:02:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:02:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:02:15 compute-0 python3.9[130472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:15 compute-0 sudo[130470]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:15 compute-0 ceph-mon[75251]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:16 compute-0 sudo[130593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duyqhdvqszhsdagitsfsbhqcvdwfpzsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839335.0831394-178-278460120840052/AnsiballZ_copy.py'
Jan 31 06:02:16 compute-0 sudo[130593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:16 compute-0 python3.9[130595]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839335.0831394-178-278460120840052/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=cb2732a3ecf7cfeaf492f8f20af361fe6045d916 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:16 compute-0 sudo[130593]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:17 compute-0 sudo[130745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfzzventzzjxxelawlegknvgilibygjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839336.8608-238-178359133128181/AnsiballZ_file.py'
Jan 31 06:02:17 compute-0 sudo[130745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:17 compute-0 python3.9[130747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:17 compute-0 sudo[130745]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:17 compute-0 sudo[130897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnzbemfeahfgstyxzlqbcqlgrgdcbcvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839337.4377728-246-176144162874938/AnsiballZ_stat.py'
Jan 31 06:02:17 compute-0 sudo[130897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:17 compute-0 python3.9[130899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:17 compute-0 sudo[130897]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:18 compute-0 sudo[131020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdfhfybhaizprisipbnqfwpfkatzcwjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839337.4377728-246-176144162874938/AnsiballZ_copy.py'
Jan 31 06:02:18 compute-0 sudo[131020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:18 compute-0 python3.9[131022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839337.4377728-246-176144162874938/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=51097f97821b38d376db29a43d97251b98a9bbe7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:18 compute-0 sudo[131020]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:18 compute-0 ceph-mon[75251]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:18 compute-0 sudo[131172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esuoyfnanmzxxotatjmkpzdjxctytgkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839338.5988765-262-2418740276466/AnsiballZ_file.py'
Jan 31 06:02:18 compute-0 sudo[131172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:19 compute-0 python3.9[131174]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:19 compute-0 sudo[131172]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:19 compute-0 sudo[131324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glhwvjwdrghibfbuovwqutmsctxkghks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839339.157648-270-37991104754181/AnsiballZ_stat.py'
Jan 31 06:02:19 compute-0 sudo[131324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:19 compute-0 python3.9[131326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:19 compute-0 sudo[131324]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:20 compute-0 sudo[131447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcicvndrqqcwsyyycruhqdzgnzziwqak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839339.157648-270-37991104754181/AnsiballZ_copy.py'
Jan 31 06:02:20 compute-0 sudo[131447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:20 compute-0 python3.9[131449]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839339.157648-270-37991104754181/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=51097f97821b38d376db29a43d97251b98a9bbe7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:20 compute-0 sudo[131447]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:20 compute-0 ceph-mon[75251]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:20 compute-0 sudo[131599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skketzvhfnxvoiucrllafmeiqnetouxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839340.4289-286-123480811997306/AnsiballZ_file.py'
Jan 31 06:02:20 compute-0 sudo[131599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:20 compute-0 python3.9[131601]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:20 compute-0 sudo[131599]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:21 compute-0 sudo[131751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhbbphnakhwcmuzpzwnyvuzudwtheor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839340.9355078-294-261461277019375/AnsiballZ_stat.py'
Jan 31 06:02:21 compute-0 sudo[131751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:21 compute-0 python3.9[131753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:21 compute-0 sudo[131751]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:21 compute-0 sudo[131874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvdtpaatfcpoesvdjxzsimhkgcjklfbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839340.9355078-294-261461277019375/AnsiballZ_copy.py'
Jan 31 06:02:21 compute-0 sudo[131874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:21 compute-0 python3.9[131876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839340.9355078-294-261461277019375/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=51097f97821b38d376db29a43d97251b98a9bbe7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:21 compute-0 sudo[131874]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:22 compute-0 sudo[132026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhmcdvbinzjjkadgzlgasxufykscnrum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839341.9514034-310-135713090965901/AnsiballZ_file.py'
Jan 31 06:02:22 compute-0 sudo[132026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:22 compute-0 python3.9[132028]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:22 compute-0 sudo[132026]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:22 compute-0 ceph-mon[75251]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:22 compute-0 sudo[132178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfzxkgbdmodntbdeqlvaxqtegtqycunk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839342.5598226-318-41684205957114/AnsiballZ_stat.py'
Jan 31 06:02:22 compute-0 sudo[132178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:22 compute-0 python3.9[132180]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:23 compute-0 sudo[132178]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:23 compute-0 sudo[132301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cijriqaagrdvxgctpgdumiwzmnjemiqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839342.5598226-318-41684205957114/AnsiballZ_copy.py'
Jan 31 06:02:23 compute-0 sudo[132301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:23 compute-0 python3.9[132303]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839342.5598226-318-41684205957114/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=51097f97821b38d376db29a43d97251b98a9bbe7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:23 compute-0 sudo[132301]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:23 compute-0 sudo[132453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dovlqzfpgzxickfwxtrcodvivvbnsmny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839343.708335-334-183523122794187/AnsiballZ_file.py'
Jan 31 06:02:23 compute-0 sudo[132453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:24 compute-0 python3.9[132455]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:24 compute-0 sudo[132453]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:24 compute-0 ceph-mon[75251]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:24 compute-0 sudo[132605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoojiijdszfmapbwvqnrihszwwtferyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839344.3157003-342-260305057747799/AnsiballZ_stat.py'
Jan 31 06:02:24 compute-0 sudo[132605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:24 compute-0 python3.9[132607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:24 compute-0 sudo[132605]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:25 compute-0 sudo[132728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffowfufeeuowhanimvfgyjemaravhzmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839344.3157003-342-260305057747799/AnsiballZ_copy.py'
Jan 31 06:02:25 compute-0 sudo[132728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:25 compute-0 python3.9[132730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839344.3157003-342-260305057747799/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=51097f97821b38d376db29a43d97251b98a9bbe7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:25 compute-0 sudo[132728]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:25 compute-0 sudo[132880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxlzatrwhifzucpinkhlwerzfanqycja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839345.6161427-358-114936493103829/AnsiballZ_file.py'
Jan 31 06:02:25 compute-0 sudo[132880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:26 compute-0 python3.9[132882]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:26 compute-0 sudo[132880]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:26 compute-0 sudo[133032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiripdqtoforpyjgoadkctznhyawlsfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839346.1810715-366-275426710402514/AnsiballZ_stat.py'
Jan 31 06:02:26 compute-0 sudo[133032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:26 compute-0 python3.9[133034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:26 compute-0 sudo[133032]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:26 compute-0 ceph-mon[75251]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:26 compute-0 sudo[133155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etqkruqdfovinaupqjocbuoevvedvmiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839346.1810715-366-275426710402514/AnsiballZ_copy.py'
Jan 31 06:02:26 compute-0 sudo[133155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:27 compute-0 python3.9[133157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839346.1810715-366-275426710402514/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=51097f97821b38d376db29a43d97251b98a9bbe7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:27 compute-0 sudo[133155]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:27 compute-0 sshd-session[127054]: Connection closed by 192.168.122.30 port 53238
Jan 31 06:02:27 compute-0 sshd-session[127051]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:02:27 compute-0 systemd-logind[797]: Session 43 logged out. Waiting for processes to exit.
Jan 31 06:02:27 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 06:02:27 compute-0 systemd[1]: session-43.scope: Consumed 18.940s CPU time.
Jan 31 06:02:27 compute-0 systemd-logind[797]: Removed session 43.
Jan 31 06:02:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:28 compute-0 ceph-mon[75251]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:29 compute-0 ceph-mon[75251]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:32 compute-0 ceph-mon[75251]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:33 compute-0 sshd-session[133182]: Accepted publickey for zuul from 192.168.122.30 port 33378 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:02:33 compute-0 systemd-logind[797]: New session 44 of user zuul.
Jan 31 06:02:33 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 31 06:02:33 compute-0 sshd-session[133182]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:02:34 compute-0 sudo[133335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxovkqqwywhxqykykekroxnamjytejhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839353.7292328-17-171881464382977/AnsiballZ_file.py'
Jan 31 06:02:34 compute-0 sudo[133335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:34 compute-0 python3.9[133337]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:34 compute-0 sudo[133335]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:34 compute-0 ceph-mon[75251]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:34 compute-0 sudo[133487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaeeycdqqwsagwrqiliosvhzwbcdkrgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839354.5295682-29-172836005693014/AnsiballZ_stat.py'
Jan 31 06:02:34 compute-0 sudo[133487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:35 compute-0 python3.9[133489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:35 compute-0 sudo[133487]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:35 compute-0 sudo[133610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djfyoatimayxhhrvwlpvbamdkjajqxxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839354.5295682-29-172836005693014/AnsiballZ_copy.py'
Jan 31 06:02:35 compute-0 sudo[133610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:35 compute-0 python3.9[133612]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839354.5295682-29-172836005693014/.source.conf _original_basename=ceph.conf follow=False checksum=d1c4a0b27f4f6b5b7443c01265e90334286da9cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:35 compute-0 sudo[133610]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:35 compute-0 ceph-mon[75251]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:35 compute-0 sudo[133762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jersmxnshxnzxxxmllloibrzmguzozaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839355.779345-29-862885795116/AnsiballZ_stat.py'
Jan 31 06:02:35 compute-0 sudo[133762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:36 compute-0 python3.9[133764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:36 compute-0 sudo[133762]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:36 compute-0 sudo[133812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:02:36 compute-0 sudo[133812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:36 compute-0 sudo[133812]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:36 compute-0 sudo[133858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:02:36 compute-0 sudo[133858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:36 compute-0 sudo[133935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdgheatpgrsrcsokgfwyasjygqhxhnin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839355.779345-29-862885795116/AnsiballZ_copy.py'
Jan 31 06:02:36 compute-0 sudo[133935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:36 compute-0 python3.9[133937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839355.779345-29-862885795116/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=1e8d85a566d029d8407896eea1d32944048a7d4b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:36 compute-0 sudo[133935]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:36 compute-0 sudo[133858]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:02:36 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:02:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:02:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:02:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:02:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:02:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:02:36 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:02:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:02:36 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:02:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:02:36 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:02:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:02:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:02:37 compute-0 sudo[133993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:02:37 compute-0 sudo[133993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:37 compute-0 sudo[133993]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:37 compute-0 sshd-session[133185]: Connection closed by 192.168.122.30 port 33378
Jan 31 06:02:37 compute-0 sshd-session[133182]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:02:37 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 06:02:37 compute-0 systemd[1]: session-44.scope: Consumed 2.017s CPU time.
Jan 31 06:02:37 compute-0 systemd-logind[797]: Session 44 logged out. Waiting for processes to exit.
Jan 31 06:02:37 compute-0 systemd-logind[797]: Removed session 44.
Jan 31 06:02:37 compute-0 sudo[134018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:02:37 compute-0 sudo[134018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:37 compute-0 podman[134055]: 2026-01-31 06:02:37.263344006 +0000 UTC m=+0.015352246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:02:37 compute-0 podman[134055]: 2026-01-31 06:02:37.375574033 +0000 UTC m=+0.127582253 container create 328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 06:02:37 compute-0 systemd[1]: Started libpod-conmon-328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b.scope.
Jan 31 06:02:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:02:37 compute-0 podman[134055]: 2026-01-31 06:02:37.458998763 +0000 UTC m=+0.211007003 container init 328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:02:37 compute-0 podman[134055]: 2026-01-31 06:02:37.467065926 +0000 UTC m=+0.219074146 container start 328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_poitras, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:02:37 compute-0 nifty_poitras[134072]: 167 167
Jan 31 06:02:37 compute-0 systemd[1]: libpod-328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b.scope: Deactivated successfully.
Jan 31 06:02:37 compute-0 podman[134055]: 2026-01-31 06:02:37.501633103 +0000 UTC m=+0.253641333 container attach 328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:02:37 compute-0 podman[134055]: 2026-01-31 06:02:37.502548798 +0000 UTC m=+0.254557028 container died 328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f24c17f7a485ed4f430d0cb5105bf7166bb936260854a69f85084fbff452be75-merged.mount: Deactivated successfully.
Jan 31 06:02:37 compute-0 podman[134055]: 2026-01-31 06:02:37.841185513 +0000 UTC m=+0.593193733 container remove 328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:02:37 compute-0 systemd[1]: libpod-conmon-328993d92be92e62156c98956b0745521f79b70149b578944db95e7363939a4b.scope: Deactivated successfully.
Jan 31 06:02:38 compute-0 podman[134097]: 2026-01-31 06:02:37.956514465 +0000 UTC m=+0.019132121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:02:38 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:02:38 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:02:38 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:02:38 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:02:38 compute-0 ceph-mon[75251]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:38 compute-0 podman[134097]: 2026-01-31 06:02:38.145374103 +0000 UTC m=+0.207991689 container create f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_grothendieck, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 06:02:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:38 compute-0 systemd[1]: Started libpod-conmon-f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763.scope.
Jan 31 06:02:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb842d011de978e3538b56cede92658fd3ab8c30164efee25c8f759c16a7bfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb842d011de978e3538b56cede92658fd3ab8c30164efee25c8f759c16a7bfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb842d011de978e3538b56cede92658fd3ab8c30164efee25c8f759c16a7bfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb842d011de978e3538b56cede92658fd3ab8c30164efee25c8f759c16a7bfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb842d011de978e3538b56cede92658fd3ab8c30164efee25c8f759c16a7bfe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:38 compute-0 podman[134097]: 2026-01-31 06:02:38.402894012 +0000 UTC m=+0.465511648 container init f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 06:02:38 compute-0 podman[134097]: 2026-01-31 06:02:38.411170141 +0000 UTC m=+0.473787737 container start f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:02:38 compute-0 podman[134097]: 2026-01-31 06:02:38.572822746 +0000 UTC m=+0.635440372 container attach f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:02:38 compute-0 strange_grothendieck[134114]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:02:38 compute-0 strange_grothendieck[134114]: --> All data devices are unavailable
Jan 31 06:02:38 compute-0 systemd[1]: libpod-f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763.scope: Deactivated successfully.
Jan 31 06:02:38 compute-0 podman[134097]: 2026-01-31 06:02:38.848726624 +0000 UTC m=+0.911344210 container died f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_grothendieck, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:02:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eb842d011de978e3538b56cede92658fd3ab8c30164efee25c8f759c16a7bfe-merged.mount: Deactivated successfully.
Jan 31 06:02:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:39 compute-0 podman[134097]: 2026-01-31 06:02:39.775589431 +0000 UTC m=+1.838207017 container remove f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_grothendieck, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:02:39 compute-0 systemd[1]: libpod-conmon-f120caad3b839280a50f69cfc6e5be38aceca9ab660345f0c58d824aefeb9763.scope: Deactivated successfully.
Jan 31 06:02:39 compute-0 sudo[134018]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:39 compute-0 sudo[134146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:02:39 compute-0 sudo[134146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:39 compute-0 sudo[134146]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:39 compute-0 sudo[134171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:02:39 compute-0 sudo[134171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:40 compute-0 podman[134209]: 2026-01-31 06:02:40.278663687 +0000 UTC m=+0.119749336 container create 8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:02:40 compute-0 podman[134209]: 2026-01-31 06:02:40.190101816 +0000 UTC m=+0.031187455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:02:40 compute-0 systemd[1]: Started libpod-conmon-8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f.scope.
Jan 31 06:02:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:02:40 compute-0 ceph-mon[75251]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:40 compute-0 podman[134209]: 2026-01-31 06:02:40.819006194 +0000 UTC m=+0.660091903 container init 8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:02:40 compute-0 podman[134209]: 2026-01-31 06:02:40.825089553 +0000 UTC m=+0.666175212 container start 8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:02:40 compute-0 epic_noyce[134225]: 167 167
Jan 31 06:02:40 compute-0 systemd[1]: libpod-8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f.scope: Deactivated successfully.
Jan 31 06:02:40 compute-0 podman[134209]: 2026-01-31 06:02:40.862308403 +0000 UTC m=+0.703394032 container attach 8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 06:02:40 compute-0 podman[134209]: 2026-01-31 06:02:40.86329661 +0000 UTC m=+0.704382239 container died 8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:02:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9538c602a1325eba3a696fff12e37b19663fd751cb1704148f8535eca17055c-merged.mount: Deactivated successfully.
Jan 31 06:02:41 compute-0 podman[134209]: 2026-01-31 06:02:41.052447657 +0000 UTC m=+0.893533286 container remove 8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:02:41 compute-0 systemd[1]: libpod-conmon-8926bed69ce02815f80c212524bb2d4b3ae0f5171319c28c9608d83ccb689c9f.scope: Deactivated successfully.
Jan 31 06:02:41 compute-0 podman[134253]: 2026-01-31 06:02:41.240541833 +0000 UTC m=+0.098406465 container create f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:02:41 compute-0 podman[134253]: 2026-01-31 06:02:41.16168467 +0000 UTC m=+0.019549332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:02:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:41 compute-0 systemd[1]: Started libpod-conmon-f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe.scope.
Jan 31 06:02:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de14908c1e728fe02be173c9927bae9faec434dcc877357dc1482068b217d4a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de14908c1e728fe02be173c9927bae9faec434dcc877357dc1482068b217d4a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de14908c1e728fe02be173c9927bae9faec434dcc877357dc1482068b217d4a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de14908c1e728fe02be173c9927bae9faec434dcc877357dc1482068b217d4a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:41 compute-0 podman[134253]: 2026-01-31 06:02:41.448536971 +0000 UTC m=+0.306401643 container init f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:02:41 compute-0 podman[134253]: 2026-01-31 06:02:41.454723073 +0000 UTC m=+0.312587715 container start f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:02:41 compute-0 podman[134253]: 2026-01-31 06:02:41.657480055 +0000 UTC m=+0.515344767 container attach f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:02:41 compute-0 distracted_herschel[134270]: {
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:     "0": [
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:         {
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "devices": [
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "/dev/loop3"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             ],
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_name": "ceph_lv0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_size": "21470642176",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "name": "ceph_lv0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "tags": {
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cluster_name": "ceph",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.crush_device_class": "",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.encrypted": "0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.objectstore": "bluestore",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osd_id": "0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.type": "block",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.vdo": "0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.with_tpm": "0"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             },
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "type": "block",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "vg_name": "ceph_vg0"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:         }
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:     ],
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:     "1": [
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:         {
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "devices": [
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "/dev/loop4"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             ],
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_name": "ceph_lv1",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_size": "21470642176",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "name": "ceph_lv1",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "tags": {
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cluster_name": "ceph",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.crush_device_class": "",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.encrypted": "0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.objectstore": "bluestore",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osd_id": "1",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.type": "block",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.vdo": "0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.with_tpm": "0"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             },
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "type": "block",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "vg_name": "ceph_vg1"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:         }
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:     ],
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:     "2": [
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:         {
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "devices": [
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "/dev/loop5"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             ],
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_name": "ceph_lv2",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_size": "21470642176",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "name": "ceph_lv2",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "tags": {
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.cluster_name": "ceph",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.crush_device_class": "",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.encrypted": "0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.objectstore": "bluestore",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osd_id": "2",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.type": "block",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.vdo": "0",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:                 "ceph.with_tpm": "0"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             },
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "type": "block",
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:             "vg_name": "ceph_vg2"
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:         }
Jan 31 06:02:41 compute-0 distracted_herschel[134270]:     ]
Jan 31 06:02:41 compute-0 distracted_herschel[134270]: }
Jan 31 06:02:41 compute-0 systemd[1]: libpod-f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe.scope: Deactivated successfully.
Jan 31 06:02:41 compute-0 podman[134253]: 2026-01-31 06:02:41.730318132 +0000 UTC m=+0.588182774 container died f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:02:41 compute-0 ceph-mon[75251]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-de14908c1e728fe02be173c9927bae9faec434dcc877357dc1482068b217d4a4-merged.mount: Deactivated successfully.
Jan 31 06:02:42 compute-0 sshd-session[134290]: Accepted publickey for zuul from 192.168.122.30 port 51064 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:02:42 compute-0 systemd-logind[797]: New session 45 of user zuul.
Jan 31 06:02:42 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 31 06:02:42 compute-0 sshd-session[134290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:02:42 compute-0 podman[134253]: 2026-01-31 06:02:42.819936625 +0000 UTC m=+1.677801297 container remove f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:02:42 compute-0 sudo[134171]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:42 compute-0 sudo[134346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:02:42 compute-0 sudo[134346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:42 compute-0 systemd[1]: libpod-conmon-f896595ac5f3f743ed1987a559a9a3b31b282ce6f374a62aa18468eac9c3d3fe.scope: Deactivated successfully.
Jan 31 06:02:42 compute-0 sudo[134346]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:42 compute-0 sudo[134371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:02:42 compute-0 sudo[134371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:43 compute-0 podman[134456]: 2026-01-31 06:02:43.205699984 +0000 UTC m=+0.028611053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:02:43 compute-0 podman[134456]: 2026-01-31 06:02:43.301917567 +0000 UTC m=+0.124828606 container create e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.304191) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839363304219, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1503, "num_deletes": 254, "total_data_size": 2173406, "memory_usage": 2200712, "flush_reason": "Manual Compaction"}
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839363560746, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1275404, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7509, "largest_seqno": 9011, "table_properties": {"data_size": 1270340, "index_size": 2204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14241, "raw_average_key_size": 20, "raw_value_size": 1258615, "raw_average_value_size": 1803, "num_data_blocks": 104, "num_entries": 698, "num_filter_entries": 698, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769839221, "oldest_key_time": 1769839221, "file_creation_time": 1769839363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 256613 microseconds, and 2960 cpu microseconds.
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.560801) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1275404 bytes OK
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.560824) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 31 06:02:43 compute-0 systemd[1]: Started libpod-conmon-e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659.scope.
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.624259) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.624310) EVENT_LOG_v1 {"time_micros": 1769839363624302, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.624341) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2166663, prev total WAL file size 2193478, number of live WAL files 2.
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.625026) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323535' seq:0, type:0; will stop at (end)
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1245KB)], [20(7757KB)]
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839363625095, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9218644, "oldest_snapshot_seqno": -1}
Jan 31 06:02:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:02:43 compute-0 python3.9[134520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3410 keys, 7182868 bytes, temperature: kUnknown
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839363873869, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7182868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7156298, "index_size": 16924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81815, "raw_average_key_size": 23, "raw_value_size": 7090943, "raw_average_value_size": 2079, "num_data_blocks": 748, "num_entries": 3410, "num_filter_entries": 3410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769839363, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:02:43 compute-0 podman[134456]: 2026-01-31 06:02:43.902564045 +0000 UTC m=+0.725475074 container init e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:02:43 compute-0 podman[134456]: 2026-01-31 06:02:43.909907738 +0000 UTC m=+0.732818757 container start e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:02:43 compute-0 wizardly_yonath[134523]: 167 167
Jan 31 06:02:43 compute-0 systemd[1]: libpod-e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659.scope: Deactivated successfully.
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.874960) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7182868 bytes
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.956028) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 36.9 rd, 28.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.6 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(12.9) write-amplify(5.6) OK, records in: 3865, records dropped: 455 output_compression: NoCompression
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.956064) EVENT_LOG_v1 {"time_micros": 1769839363956050, "job": 6, "event": "compaction_finished", "compaction_time_micros": 249612, "compaction_time_cpu_micros": 26405, "output_level": 6, "num_output_files": 1, "total_output_size": 7182868, "num_input_records": 3865, "num_output_records": 3410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839363956420, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839363957076, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.624889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.957242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.957250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.957253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.957256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:02:43 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:02:43.957258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:02:44 compute-0 podman[134456]: 2026-01-31 06:02:44.033235462 +0000 UTC m=+0.856146481 container attach e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:02:44 compute-0 podman[134456]: 2026-01-31 06:02:44.033594782 +0000 UTC m=+0.856505801 container died e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e2db38af5308e54bfcc226b8bc31fac926c6ea3188d446b81f26b82ab01c2a2-merged.mount: Deactivated successfully.
Jan 31 06:02:44 compute-0 podman[134456]: 2026-01-31 06:02:44.308916972 +0000 UTC m=+1.131827991 container remove e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 06:02:44 compute-0 systemd[1]: libpod-conmon-e870f30ba0a273946538fad8d054c17a6ac14f25eaecc7800e86fd990bfe4659.scope: Deactivated successfully.
Jan 31 06:02:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:02:44
Jan 31 06:02:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:02:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:02:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'default.rgw.control']
Jan 31 06:02:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:02:44 compute-0 podman[134634]: 2026-01-31 06:02:44.42804404 +0000 UTC m=+0.049335007 container create f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cori, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:02:44 compute-0 systemd[1]: Started libpod-conmon-f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf.scope.
Jan 31 06:02:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:02:44 compute-0 podman[134634]: 2026-01-31 06:02:44.399832799 +0000 UTC m=+0.021123786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:02:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930cc41bc097348a9566de2e175c3290c15cc81a314c773e89843ee1fbd9b439/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930cc41bc097348a9566de2e175c3290c15cc81a314c773e89843ee1fbd9b439/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930cc41bc097348a9566de2e175c3290c15cc81a314c773e89843ee1fbd9b439/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/930cc41bc097348a9566de2e175c3290c15cc81a314c773e89843ee1fbd9b439/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:02:44 compute-0 sudo[134720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dermglnsfmrcnzvqnmvqksqsgdlatrai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839364.1243496-29-162417016490896/AnsiballZ_file.py'
Jan 31 06:02:44 compute-0 sudo[134720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:44 compute-0 podman[134634]: 2026-01-31 06:02:44.556089265 +0000 UTC m=+0.177380252 container init f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cori, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:02:44 compute-0 podman[134634]: 2026-01-31 06:02:44.561767952 +0000 UTC m=+0.183058919 container start f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cori, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 06:02:44 compute-0 podman[134634]: 2026-01-31 06:02:44.605213215 +0000 UTC m=+0.226504182 container attach f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cori, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:02:44 compute-0 python3.9[134722]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:44 compute-0 sudo[134720]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:44 compute-0 ceph-mon[75251]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:45 compute-0 sudo[134939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxuufauphtpyymbzuxfgqpwogljdquok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839364.8557832-29-167486728736625/AnsiballZ_file.py'
Jan 31 06:02:45 compute-0 sudo[134939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:45 compute-0 lvm[134948]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:02:45 compute-0 lvm[134948]: VG ceph_vg0 finished
Jan 31 06:02:45 compute-0 lvm[134951]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:02:45 compute-0 lvm[134951]: VG ceph_vg1 finished
Jan 31 06:02:45 compute-0 lvm[134953]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:02:45 compute-0 lvm[134953]: VG ceph_vg2 finished
Jan 31 06:02:45 compute-0 elastic_cori[134691]: {}
Jan 31 06:02:45 compute-0 python3.9[134941]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:45 compute-0 systemd[1]: libpod-f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf.scope: Deactivated successfully.
Jan 31 06:02:45 compute-0 podman[134634]: 2026-01-31 06:02:45.303040822 +0000 UTC m=+0.924331799 container died f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cori, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 06:02:45 compute-0 sudo[134939]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:02:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-930cc41bc097348a9566de2e175c3290c15cc81a314c773e89843ee1fbd9b439-merged.mount: Deactivated successfully.
Jan 31 06:02:46 compute-0 ceph-mon[75251]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:46 compute-0 python3.9[135117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:02:46 compute-0 podman[134634]: 2026-01-31 06:02:46.159313156 +0000 UTC m=+1.780604113 container remove f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:02:46 compute-0 sudo[134371]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:02:46 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:02:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:02:46 compute-0 systemd[1]: libpod-conmon-f6112a81662939bdf0faf0f39556ce2d9dc527e0bd36212607d97e31869906bf.scope: Deactivated successfully.
Jan 31 06:02:46 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:02:46 compute-0 sudo[135194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:02:46 compute-0 sudo[135194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:02:46 compute-0 sudo[135194]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:46 compute-0 sudo[135292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpizfjdzhbacxqdfryozfahphcnjxhcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839366.2147756-52-261104149630125/AnsiballZ_seboolean.py'
Jan 31 06:02:46 compute-0 sudo[135292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:46 compute-0 python3.9[135294]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 06:02:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:02:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:02:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:48 compute-0 sudo[135292]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:48 compute-0 ceph-mon[75251]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:48 compute-0 sudo[135448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdaygsncgwnivxkerbpxbuvjuchuedrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839368.6109354-62-132390599831564/AnsiballZ_setup.py'
Jan 31 06:02:48 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 06:02:48 compute-0 sudo[135448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:49 compute-0 python3.9[135450]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:02:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:49 compute-0 sudo[135448]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:49 compute-0 sudo[135532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyvdmlreidttgckjhmomslxwlojuuerf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839368.6109354-62-132390599831564/AnsiballZ_dnf.py'
Jan 31 06:02:49 compute-0 sudo[135532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:50 compute-0 python3.9[135534]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:02:50 compute-0 ceph-mon[75251]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:51 compute-0 sudo[135532]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:52 compute-0 sudo[135685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoyiflorhoqqfdjlgexjkrwselidgscy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839371.6392293-74-191924960532309/AnsiballZ_systemd.py'
Jan 31 06:02:52 compute-0 sudo[135685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:52 compute-0 python3.9[135687]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:02:52 compute-0 sudo[135685]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:52 compute-0 ceph-mon[75251]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:53 compute-0 sudo[135840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpizcsmmdemoxfgnmhhcrwrodcudlvgd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769839372.8216136-82-110107688268027/AnsiballZ_edpm_nftables_snippet.py'
Jan 31 06:02:53 compute-0 sudo[135840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:53 compute-0 python3[135842]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 06:02:53 compute-0 sudo[135840]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:53 compute-0 ceph-mon[75251]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:53 compute-0 sudo[135992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fltahsfllhkingxcufrnjwlwwvhqkhkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839373.741663-91-205952931359181/AnsiballZ_file.py'
Jan 31 06:02:53 compute-0 sudo[135992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:54 compute-0 python3.9[135994]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:54 compute-0 sudo[135992]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:54 compute-0 sudo[136144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lijkwupbwiebvhmsdkxerrrsajoghctc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839374.3485844-99-220488735481672/AnsiballZ_stat.py'
Jan 31 06:02:54 compute-0 sudo[136144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:54 compute-0 python3.9[136146]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:54 compute-0 sudo[136144]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:55 compute-0 sudo[136222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqroifavnhepacyecglkketsokrueiyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839374.3485844-99-220488735481672/AnsiballZ_file.py'
Jan 31 06:02:55 compute-0 sudo[136222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:55 compute-0 python3.9[136224]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:55 compute-0 sudo[136222]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:02:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:02:55 compute-0 sudo[136374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshimbljgrpsbasklzkvsutgszriwdry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839375.5241432-111-117304734797921/AnsiballZ_stat.py'
Jan 31 06:02:55 compute-0 sudo[136374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:55 compute-0 python3.9[136376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:55 compute-0 sudo[136374]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:56 compute-0 sudo[136452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfhnangsxnhldkeeihtcqybonvadgtzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839375.5241432-111-117304734797921/AnsiballZ_file.py'
Jan 31 06:02:56 compute-0 sudo[136452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:56 compute-0 python3.9[136454]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ss8kjvv9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:56 compute-0 sudo[136452]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:56 compute-0 ceph-mon[75251]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:56 compute-0 sudo[136604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mricybjoeglmtkzarwgaxnrgxiocnzqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839376.6361916-123-243964701817217/AnsiballZ_stat.py'
Jan 31 06:02:56 compute-0 sudo[136604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:57 compute-0 python3.9[136606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:57 compute-0 sudo[136604]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:57 compute-0 sudo[136682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmuvcgukxnvyoofsamzzfqqywtxuphi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839376.6361916-123-243964701817217/AnsiballZ_file.py'
Jan 31 06:02:57 compute-0 sudo[136682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:57 compute-0 python3.9[136684]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:02:57 compute-0 sudo[136682]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:58 compute-0 sudo[136834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlvfxswkmxmimumhjbyqxeeynhbjdvyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839377.7206876-136-10681112075126/AnsiballZ_command.py'
Jan 31 06:02:58 compute-0 sudo[136834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:58 compute-0 python3.9[136836]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:02:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:02:58 compute-0 sudo[136834]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:58 compute-0 ceph-mon[75251]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:58 compute-0 sudo[136987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoyxsyssjzuzhoqskbmjazrkrrlcfier ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769839378.4875853-144-201628350319215/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 06:02:58 compute-0 sudo[136987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:59 compute-0 python3[136989]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 06:02:59 compute-0 sudo[136987]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:02:59 compute-0 sudo[137139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wssrhcihzdvvoknmwceivdsqxarkxhfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839379.305778-152-33741397689199/AnsiballZ_stat.py'
Jan 31 06:02:59 compute-0 sudo[137139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:02:59 compute-0 python3.9[137141]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:02:59 compute-0 sudo[137139]: pam_unix(sudo:session): session closed for user root
Jan 31 06:02:59 compute-0 ceph-mon[75251]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:00 compute-0 sudo[137264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnugdtlqvebjgiwxcoilwfmmlvhfugvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839379.305778-152-33741397689199/AnsiballZ_copy.py'
Jan 31 06:03:00 compute-0 sudo[137264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:00 compute-0 python3.9[137266]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839379.305778-152-33741397689199/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:00 compute-0 sudo[137264]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:00 compute-0 sudo[137416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lovmisvsfyvbfgrvmsfeapxwkfivvtrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839380.618328-167-53828787296143/AnsiballZ_stat.py'
Jan 31 06:03:00 compute-0 sudo[137416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:01 compute-0 python3.9[137418]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:01 compute-0 sudo[137416]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:01 compute-0 sudo[137541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxaxiwnixdnyaspvmuyqtmeoxcyanonw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839380.618328-167-53828787296143/AnsiballZ_copy.py'
Jan 31 06:03:01 compute-0 sudo[137541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:01 compute-0 python3.9[137543]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839380.618328-167-53828787296143/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:01 compute-0 sudo[137541]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:01 compute-0 sudo[137693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmbppcsaxhtfejplpjugfplgxnitkyph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839381.7705142-182-14958925495982/AnsiballZ_stat.py'
Jan 31 06:03:01 compute-0 sudo[137693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:02 compute-0 python3.9[137695]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:02 compute-0 sudo[137693]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:02 compute-0 sudo[137818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbuppdkxjsdgmstolwcchdtknqbncsef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839381.7705142-182-14958925495982/AnsiballZ_copy.py'
Jan 31 06:03:02 compute-0 sudo[137818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:02 compute-0 ceph-mon[75251]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:02 compute-0 python3.9[137820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839381.7705142-182-14958925495982/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:02 compute-0 sudo[137818]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:03 compute-0 sudo[137970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qulvpkppvjcvzxbhelmtnhtwnlkuroks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839382.9308069-197-176656425788712/AnsiballZ_stat.py'
Jan 31 06:03:03 compute-0 sudo[137970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:03 compute-0 python3.9[137972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:03 compute-0 sudo[137970]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:03 compute-0 ceph-mon[75251]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:03 compute-0 sudo[138095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbbthjznqumtyqwjqgllpzgmaazxrsaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839382.9308069-197-176656425788712/AnsiballZ_copy.py'
Jan 31 06:03:03 compute-0 sudo[138095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:04 compute-0 python3.9[138097]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839382.9308069-197-176656425788712/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:04 compute-0 sudo[138095]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:04 compute-0 sudo[138247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvenoqbdqcsbxennqcrfnbboefbydani ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839384.361256-212-267852697806655/AnsiballZ_stat.py'
Jan 31 06:03:04 compute-0 sudo[138247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:05 compute-0 python3.9[138249]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:05 compute-0 sudo[138247]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:05 compute-0 sudo[138372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwowhasnwxrwzpwitdlfwdgvyimpmrjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839384.361256-212-267852697806655/AnsiballZ_copy.py'
Jan 31 06:03:05 compute-0 sudo[138372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:05 compute-0 python3.9[138374]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839384.361256-212-267852697806655/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:05 compute-0 sudo[138372]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:05 compute-0 sudo[138524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqylbahlwrnygwqxpggnsqepqvhesbmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839385.667763-227-126219400890855/AnsiballZ_file.py'
Jan 31 06:03:05 compute-0 sudo[138524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:06 compute-0 python3.9[138526]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:06 compute-0 sudo[138524]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:06 compute-0 ceph-mon[75251]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:06 compute-0 sudo[138676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgkcaprhhpamelgiaknapozthdjhtfjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839386.3664467-235-261875493130121/AnsiballZ_command.py'
Jan 31 06:03:06 compute-0 sudo[138676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:06 compute-0 python3.9[138678]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:06 compute-0 sudo[138676]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:07 compute-0 sudo[138831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnfrobdxspsejggiuyvolcljhdheslzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839387.0103014-243-241085225975795/AnsiballZ_blockinfile.py'
Jan 31 06:03:07 compute-0 sudo[138831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:07 compute-0 python3.9[138833]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:07 compute-0 sudo[138831]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:08 compute-0 sudo[138983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkxsnajjuaynahfbkrttblfvhncyztzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839387.7934012-252-278799914046736/AnsiballZ_command.py'
Jan 31 06:03:08 compute-0 sudo[138983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:08 compute-0 python3.9[138985]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:08 compute-0 sudo[138983]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:08 compute-0 sudo[139136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sagfdtrjwdulzbwciuyjjrjqxxviwpkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839388.4145684-260-215677284229616/AnsiballZ_stat.py'
Jan 31 06:03:08 compute-0 sudo[139136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:08 compute-0 ceph-mon[75251]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:08 compute-0 python3.9[139138]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:03:08 compute-0 sudo[139136]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:09 compute-0 sudo[139290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swuuqbkhucrfalcnettvhfnyhzgymjer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839388.9733076-268-63525829751080/AnsiballZ_command.py'
Jan 31 06:03:09 compute-0 sudo[139290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:09 compute-0 python3.9[139292]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:09 compute-0 sudo[139290]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:09 compute-0 sudo[139445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzxcffabsvfhkmlzoisjyisdmeygvvlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839389.5775573-276-253553073940582/AnsiballZ_file.py'
Jan 31 06:03:09 compute-0 sudo[139445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:09 compute-0 ceph-mon[75251]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:09 compute-0 python3.9[139447]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:09 compute-0 sudo[139445]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:11 compute-0 python3.9[139597]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:03:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:11 compute-0 sudo[139748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-terqwxqyyuimnqktbawozherdgcrtxwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839391.5753293-316-219301746507989/AnsiballZ_command.py'
Jan 31 06:03:11 compute-0 sudo[139748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:11 compute-0 python3.9[139750]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:11 compute-0 ovs-vsctl[139751]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 06:03:11 compute-0 sudo[139748]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:12 compute-0 sudo[139901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgpzjbkhmeutmenibsnylbdsgkvobhbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839392.1185431-325-220953129613085/AnsiballZ_command.py'
Jan 31 06:03:12 compute-0 sudo[139901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:12 compute-0 ceph-mon[75251]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:12 compute-0 python3.9[139903]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:12 compute-0 sudo[139901]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:12 compute-0 sudo[140056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uevsqvrltcatjmkkowdwoqomysugkwni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839392.6153808-333-220066911661992/AnsiballZ_command.py'
Jan 31 06:03:12 compute-0 sudo[140056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:12 compute-0 python3.9[140058]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:12 compute-0 ovs-vsctl[140059]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 06:03:13 compute-0 sudo[140056]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:13 compute-0 python3.9[140209]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:03:14 compute-0 sudo[140361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbbhbdfzhcnvtosnsxdnqpivdkvrovft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839393.8289406-350-189905183904697/AnsiballZ_file.py'
Jan 31 06:03:14 compute-0 sudo[140361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:14 compute-0 python3.9[140363]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:03:14 compute-0 sudo[140361]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:14 compute-0 ceph-mon[75251]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:14 compute-0 sudo[140513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzbmvaybyctmqcgjgcbgafqmvxsadarq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839394.426336-358-120369619790020/AnsiballZ_stat.py'
Jan 31 06:03:14 compute-0 sudo[140513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:14 compute-0 python3.9[140515]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:14 compute-0 sudo[140513]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:15 compute-0 sudo[140591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfybrefluefbfokvmdhmipbzoyviakwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839394.426336-358-120369619790020/AnsiballZ_file.py'
Jan 31 06:03:15 compute-0 sudo[140591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:15 compute-0 python3.9[140593]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:03:15 compute-0 sudo[140591]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:03:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:03:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:03:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:03:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:03:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:03:15 compute-0 sudo[140743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afzbfqlhikzoqdoiqivjdofvvtzgmekv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839395.346948-358-6131123448544/AnsiballZ_stat.py'
Jan 31 06:03:15 compute-0 sudo[140743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:15 compute-0 python3.9[140745]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:15 compute-0 sudo[140743]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:16 compute-0 sudo[140821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-midezihgwxzblyqndvvizrmpyplabdmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839395.346948-358-6131123448544/AnsiballZ_file.py'
Jan 31 06:03:16 compute-0 sudo[140821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:16 compute-0 python3.9[140823]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:03:16 compute-0 sudo[140821]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:16 compute-0 ceph-mon[75251]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:16 compute-0 sudo[140973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxeganswqvutkamjmrcionxmryubabml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839396.3793232-381-153840870041727/AnsiballZ_file.py'
Jan 31 06:03:16 compute-0 sudo[140973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:16 compute-0 python3.9[140975]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:16 compute-0 sudo[140973]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:17 compute-0 sudo[141125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnoedvwhfcrunqbilzfyeneozlugtygh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839396.9359283-389-91851251012497/AnsiballZ_stat.py'
Jan 31 06:03:17 compute-0 sudo[141125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:17 compute-0 python3.9[141127]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:17 compute-0 sudo[141125]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:17 compute-0 sudo[141203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jconcowtsvjqhxtupqbmpaafygxjcmcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839396.9359283-389-91851251012497/AnsiballZ_file.py'
Jan 31 06:03:17 compute-0 sudo[141203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:17 compute-0 python3.9[141205]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:17 compute-0 sudo[141203]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:18 compute-0 sudo[141355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfvcpjqszvoupwybfyyxcbadybqbsqgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839397.8562655-401-51367006978692/AnsiballZ_stat.py'
Jan 31 06:03:18 compute-0 sudo[141355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:18 compute-0 python3.9[141357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:18 compute-0 sudo[141355]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:18 compute-0 ceph-mon[75251]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:18 compute-0 sudo[141433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjbwzkensbztusbboqhbrwtnyshkwirg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839397.8562655-401-51367006978692/AnsiballZ_file.py'
Jan 31 06:03:18 compute-0 sudo[141433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:18 compute-0 python3.9[141435]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:18 compute-0 sudo[141433]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:19 compute-0 sudo[141585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gczygdytggcdzjlwnzsfojpsxhkuamzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839398.8911762-413-277404044708693/AnsiballZ_systemd.py'
Jan 31 06:03:19 compute-0 sudo[141585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:19 compute-0 python3.9[141587]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:03:19 compute-0 systemd[1]: Reloading.
Jan 31 06:03:19 compute-0 systemd-rc-local-generator[141614]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:03:19 compute-0 systemd-sysv-generator[141618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:03:19 compute-0 sudo[141585]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:20 compute-0 sudo[141774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlbjvmqcxqfupxxrxmeeiwilhhwvloio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839399.9183865-421-223712346052091/AnsiballZ_stat.py'
Jan 31 06:03:20 compute-0 sudo[141774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:20 compute-0 python3.9[141776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:20 compute-0 sudo[141774]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:20 compute-0 ceph-mon[75251]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:20 compute-0 sudo[141852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqbbkkzfqomnactrzkfedhqqxvtckbqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839399.9183865-421-223712346052091/AnsiballZ_file.py'
Jan 31 06:03:20 compute-0 sudo[141852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:20 compute-0 python3.9[141854]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:20 compute-0 sudo[141852]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:21 compute-0 sudo[142004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdbutdchfnogivykswudwwrhewylfdoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839401.0228786-433-69216633151032/AnsiballZ_stat.py'
Jan 31 06:03:21 compute-0 sudo[142004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:21 compute-0 python3.9[142006]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:21 compute-0 sudo[142004]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:21 compute-0 sudo[142082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vghwakrdlyjlasskgooncxvfbenrdwxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839401.0228786-433-69216633151032/AnsiballZ_file.py'
Jan 31 06:03:21 compute-0 sudo[142082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:21 compute-0 python3.9[142084]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:21 compute-0 sudo[142082]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:22 compute-0 sudo[142234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxztlcwzecwjhhmilbhqhqexjgabtqcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839402.0452273-445-206024481285313/AnsiballZ_systemd.py'
Jan 31 06:03:22 compute-0 sudo[142234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:22 compute-0 ceph-mon[75251]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:22 compute-0 python3.9[142236]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:03:22 compute-0 systemd[1]: Reloading.
Jan 31 06:03:22 compute-0 systemd-rc-local-generator[142260]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:03:22 compute-0 systemd-sysv-generator[142264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:03:22 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 06:03:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 06:03:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 06:03:22 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 06:03:22 compute-0 sudo[142234]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:23 compute-0 sudo[142427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymqsnkuurgtalfdowzkcokhqyqifaucc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839403.2248073-455-197413416518693/AnsiballZ_file.py'
Jan 31 06:03:23 compute-0 sudo[142427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:23 compute-0 python3.9[142429]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:03:23 compute-0 sudo[142427]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:24 compute-0 sudo[142579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdmdcvxnubmljdgjzbflcvtuszwfkuog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839403.9494438-463-268418708057547/AnsiballZ_stat.py'
Jan 31 06:03:24 compute-0 sudo[142579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:24 compute-0 python3.9[142581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:24 compute-0 sudo[142579]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:24 compute-0 ceph-mon[75251]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:24 compute-0 sudo[142702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thalwzhfqedsqvhsqennalefqzmbigag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839403.9494438-463-268418708057547/AnsiballZ_copy.py'
Jan 31 06:03:24 compute-0 sudo[142702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:24 compute-0 python3.9[142704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839403.9494438-463-268418708057547/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:03:24 compute-0 sudo[142702]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:25 compute-0 sudo[142854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axuaizzyqeaoxqhhonymfriemlrqdnwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839405.2818928-480-214647732072496/AnsiballZ_file.py'
Jan 31 06:03:25 compute-0 sudo[142854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:25 compute-0 ceph-mon[75251]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:25 compute-0 python3.9[142856]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:25 compute-0 sudo[142854]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:26 compute-0 sudo[143006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-schrdekirmujnqqdvtnzblhgosemspxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839406.0065155-488-29850831689590/AnsiballZ_file.py'
Jan 31 06:03:26 compute-0 sudo[143006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:26 compute-0 python3.9[143008]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:03:26 compute-0 sudo[143006]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:03:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2056 writes, 9198 keys, 2056 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2056 writes, 2056 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2056 writes, 9198 keys, 2056 commit groups, 1.0 writes per commit group, ingest: 12.11 MB, 0.02 MB/s
                                           Interval WAL: 2056 writes, 2056 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.0      0.52              0.03         3    0.173       0      0       0.0       0.0
                                             L6      1/0    6.85 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     28.2     24.8      0.58              0.05         2    0.291    7321    744       0.0       0.0
                                            Sum      1/0    6.85 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     14.9     21.1      1.10              0.08         5    0.220    7321    744       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     14.9     21.1      1.10              0.08         4    0.275    7321    744       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     28.2     24.8      0.58              0.05         2    0.291    7321    744       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.0      0.52              0.03         2    0.258       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 1.1 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2e66f78d0#2 capacity: 308.00 MB usage: 672.95 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(40,581.42 KB,0.184349%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,62.92 KB,0.0199504%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 06:03:26 compute-0 sudo[143158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvcqocnzsprbtxcitdnfebwngymhkmio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839406.7155614-496-177307406213766/AnsiballZ_stat.py'
Jan 31 06:03:26 compute-0 sudo[143158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:27 compute-0 python3.9[143160]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:27 compute-0 sudo[143158]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:27 compute-0 sudo[143281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujxwowhpknknpyukacxjdyflqnnaoklx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839406.7155614-496-177307406213766/AnsiballZ_copy.py'
Jan 31 06:03:27 compute-0 sudo[143281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:27 compute-0 python3.9[143283]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839406.7155614-496-177307406213766/.source.json _original_basename=.don6qn_9 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:27 compute-0 sudo[143281]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:28 compute-0 python3.9[143433]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:28 compute-0 ceph-mon[75251]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:30 compute-0 sudo[143854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmhrnabcwulglysiovsivjedfijfizbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839409.8129213-536-192535298738188/AnsiballZ_container_config_data.py'
Jan 31 06:03:30 compute-0 sudo[143854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:30 compute-0 ceph-mon[75251]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:30 compute-0 python3.9[143856]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 06:03:30 compute-0 sudo[143854]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:31 compute-0 sudo[144006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xenfcjjxmrbokepapnfmgukeipjhvyet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839410.764172-547-153423466743160/AnsiballZ_container_config_hash.py'
Jan 31 06:03:31 compute-0 sudo[144006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:31 compute-0 python3.9[144008]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 06:03:31 compute-0 sudo[144006]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:32 compute-0 sudo[144158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkciacfteljnqmyvzxtvntpjzwlacgt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769839411.6636982-557-116991161190496/AnsiballZ_edpm_container_manage.py'
Jan 31 06:03:32 compute-0 sudo[144158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:32 compute-0 python3[144160]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 06:03:32 compute-0 ceph-mon[75251]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:34 compute-0 ceph-mon[75251]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:36 compute-0 ceph-mon[75251]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:37 compute-0 podman[144174]: 2026-01-31 06:03:37.10009249 +0000 UTC m=+4.628985687 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 06:03:37 compute-0 podman[144294]: 2026-01-31 06:03:37.181624596 +0000 UTC m=+0.018050447 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 06:03:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:37 compute-0 podman[144294]: 2026-01-31 06:03:37.891326654 +0000 UTC m=+0.727752495 container create 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20260127, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 31 06:03:37 compute-0 python3[144160]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 06:03:38 compute-0 sudo[144158]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:38 compute-0 sudo[144482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-minerowtxjcvqyxwlwtktblnjuqdgdrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839418.177231-565-165145865091092/AnsiballZ_stat.py'
Jan 31 06:03:38 compute-0 sudo[144482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:38 compute-0 ceph-mon[75251]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:38 compute-0 python3.9[144484]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:03:38 compute-0 sudo[144482]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:39 compute-0 sudo[144636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uatxlznjdajxfunisuqpcnurvglcayfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839419.036411-574-4080982246124/AnsiballZ_file.py'
Jan 31 06:03:39 compute-0 sudo[144636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:39 compute-0 python3.9[144638]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:39 compute-0 sudo[144636]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:39 compute-0 sudo[144712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqvrgzaerpfhifxbgzllpkwwrbwlple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839419.036411-574-4080982246124/AnsiballZ_stat.py'
Jan 31 06:03:39 compute-0 sudo[144712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:39 compute-0 python3.9[144714]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:03:39 compute-0 sudo[144712]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:40 compute-0 sudo[144863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mssghtfjhwzmatarojdxeaxozzkvkegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839420.0471287-574-130020840779587/AnsiballZ_copy.py'
Jan 31 06:03:40 compute-0 sudo[144863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:40 compute-0 python3.9[144865]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839420.0471287-574-130020840779587/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:40 compute-0 sudo[144863]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:40 compute-0 ceph-mon[75251]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:40 compute-0 sudo[144939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmibrzgswyxljdmjnkhyjcgcruedetgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839420.0471287-574-130020840779587/AnsiballZ_systemd.py'
Jan 31 06:03:40 compute-0 sudo[144939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:41 compute-0 python3.9[144941]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:03:41 compute-0 systemd[1]: Reloading.
Jan 31 06:03:41 compute-0 systemd-rc-local-generator[144969]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:03:41 compute-0 systemd-sysv-generator[144972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:03:41 compute-0 sudo[144939]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:41 compute-0 ceph-mon[75251]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:41 compute-0 sudo[145050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgcvwtsjegeqmikghftzrllyslywwqox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839420.0471287-574-130020840779587/AnsiballZ_systemd.py'
Jan 31 06:03:41 compute-0 sudo[145050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:42 compute-0 python3.9[145052]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:03:42 compute-0 systemd[1]: Reloading.
Jan 31 06:03:42 compute-0 systemd-sysv-generator[145084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:03:42 compute-0 systemd-rc-local-generator[145081]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:03:42 compute-0 systemd[1]: Starting ovn_controller container...
Jan 31 06:03:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4bec6be69311c468abf4a623c3fef4a6e05f2ef0ea0c7c22293031daab38671/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:43 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695.
Jan 31 06:03:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:43 compute-0 podman[145093]: 2026-01-31 06:03:43.575961718 +0000 UTC m=+0.864510120 container init 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:03:43 compute-0 ovn_controller[145108]: + sudo -E kolla_set_configs
Jan 31 06:03:43 compute-0 podman[145093]: 2026-01-31 06:03:43.600024772 +0000 UTC m=+0.888573074 container start 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:03:43 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 31 06:03:43 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 06:03:43 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 06:03:43 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 31 06:03:43 compute-0 systemd[145126]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 31 06:03:43 compute-0 systemd[145126]: Queued start job for default target Main User Target.
Jan 31 06:03:43 compute-0 systemd[145126]: Created slice User Application Slice.
Jan 31 06:03:43 compute-0 systemd[145126]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 06:03:43 compute-0 systemd[145126]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 06:03:43 compute-0 systemd[145126]: Reached target Paths.
Jan 31 06:03:43 compute-0 systemd[145126]: Reached target Timers.
Jan 31 06:03:43 compute-0 systemd[145126]: Starting D-Bus User Message Bus Socket...
Jan 31 06:03:43 compute-0 systemd[145126]: Starting Create User's Volatile Files and Directories...
Jan 31 06:03:43 compute-0 systemd[145126]: Listening on D-Bus User Message Bus Socket.
Jan 31 06:03:43 compute-0 systemd[145126]: Reached target Sockets.
Jan 31 06:03:43 compute-0 systemd[145126]: Finished Create User's Volatile Files and Directories.
Jan 31 06:03:43 compute-0 systemd[145126]: Reached target Basic System.
Jan 31 06:03:43 compute-0 systemd[145126]: Reached target Main User Target.
Jan 31 06:03:43 compute-0 systemd[145126]: Startup finished in 109ms.
Jan 31 06:03:43 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 31 06:03:43 compute-0 systemd[1]: Started Session c1 of User root.
Jan 31 06:03:43 compute-0 edpm-start-podman-container[145093]: ovn_controller
Jan 31 06:03:43 compute-0 ovn_controller[145108]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 06:03:43 compute-0 ovn_controller[145108]: INFO:__main__:Validating config file
Jan 31 06:03:43 compute-0 ovn_controller[145108]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 06:03:43 compute-0 ovn_controller[145108]: INFO:__main__:Writing out command to execute
Jan 31 06:03:43 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 06:03:43 compute-0 ovn_controller[145108]: ++ cat /run_command
Jan 31 06:03:43 compute-0 ovn_controller[145108]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 06:03:43 compute-0 ovn_controller[145108]: + ARGS=
Jan 31 06:03:43 compute-0 ovn_controller[145108]: + sudo kolla_copy_cacerts
Jan 31 06:03:43 compute-0 edpm-start-podman-container[145092]: Creating additional drop-in dependency for "ovn_controller" (1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695)
Jan 31 06:03:43 compute-0 podman[145115]: 2026-01-31 06:03:43.842643894 +0000 UTC m=+0.235936475 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 06:03:43 compute-0 systemd[1]: 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695-eb978ee1992dc8a.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 06:03:43 compute-0 systemd[1]: 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695-eb978ee1992dc8a.service: Failed with result 'exit-code'.
Jan 31 06:03:43 compute-0 systemd[1]: Reloading.
Jan 31 06:03:43 compute-0 systemd-rc-local-generator[145198]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:03:43 compute-0 systemd-sysv-generator[145204]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:03:44 compute-0 systemd[1]: Started ovn_controller container.
Jan 31 06:03:44 compute-0 systemd[1]: Started Session c2 of User root.
Jan 31 06:03:44 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 06:03:44 compute-0 ovn_controller[145108]: + [[ ! -n '' ]]
Jan 31 06:03:44 compute-0 ovn_controller[145108]: + . kolla_extend_start
Jan 31 06:03:44 compute-0 ovn_controller[145108]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 06:03:44 compute-0 ovn_controller[145108]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 06:03:44 compute-0 ovn_controller[145108]: + umask 0022
Jan 31 06:03:44 compute-0 ovn_controller[145108]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 06:03:44 compute-0 sudo[145050]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <info>  [1769839424.1836] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <info>  [1769839424.1844] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <warn>  [1769839424.1846] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <info>  [1769839424.1855] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <info>  [1769839424.1862] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <info>  [1769839424.1866] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 06:03:44 compute-0 kernel: br-int: entered promiscuous mode
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 06:03:44 compute-0 systemd-udevd[145241]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 06:03:44 compute-0 ovn_controller[145108]: 2026-01-31T06:03:44Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <info>  [1769839424.2446] manager: (ovn-27de50-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 06:03:44 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <info>  [1769839424.2591] device (genev_sys_6081): carrier: link connected
Jan 31 06:03:44 compute-0 NetworkManager[49039]: <info>  [1769839424.2596] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 06:03:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:03:44
Jan 31 06:03:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:03:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:03:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'backups']
Jan 31 06:03:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:03:44 compute-0 ceph-mon[75251]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:44 compute-0 python3.9[145371]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:03:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:03:45 compute-0 sudo[145521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wihxaxlkhjzqzwnfbqkugnkdpskmqjft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839425.3144846-619-256489470313632/AnsiballZ_stat.py'
Jan 31 06:03:45 compute-0 sudo[145521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:45 compute-0 python3.9[145523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:03:45 compute-0 sudo[145521]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:45 compute-0 ceph-mon[75251]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:46 compute-0 sudo[145644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iipsstipekxsonpvtuuzfeblbfictpym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839425.3144846-619-256489470313632/AnsiballZ_copy.py'
Jan 31 06:03:46 compute-0 sudo[145644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:46 compute-0 python3.9[145646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839425.3144846-619-256489470313632/.source.yaml _original_basename=.tveickc8 follow=False checksum=af4d5ebfcedff60dee2a56edaab173487e494850 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:03:46 compute-0 sudo[145644]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:46 compute-0 sudo[145671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:03:46 compute-0 sudo[145671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:46 compute-0 sudo[145671]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:46 compute-0 sudo[145719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:03:46 compute-0 sudo[145719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:46 compute-0 sudo[145853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmtzrbzehvrhssguospufaxizsiizelr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839426.4097092-634-179177991269504/AnsiballZ_command.py'
Jan 31 06:03:46 compute-0 sudo[145853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:46 compute-0 python3.9[145860]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:46 compute-0 ovs-vsctl[145880]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 06:03:46 compute-0 sudo[145719]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:46 compute-0 sudo[145853]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:03:46 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:03:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:03:46 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:03:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:03:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:03:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:03:47 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:03:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:03:47 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:03:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:03:47 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:03:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:03:47 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:03:47 compute-0 sudo[145928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:03:47 compute-0 sudo[145928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:47 compute-0 sudo[145928]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:47 compute-0 sudo[145976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:03:47 compute-0 sudo[145976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:47 compute-0 sudo[146080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipaitzuhvrcxnfbapbhowvqbslnizeoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839427.0293577-642-209215109512223/AnsiballZ_command.py'
Jan 31 06:03:47 compute-0 sudo[146080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:47 compute-0 podman[146095]: 2026-01-31 06:03:47.322236468 +0000 UTC m=+0.019445186 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:03:47 compute-0 python3.9[146082]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:47 compute-0 podman[146095]: 2026-01-31 06:03:47.486183444 +0000 UTC m=+0.183392182 container create a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 06:03:47 compute-0 ovs-vsctl[146110]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 06:03:47 compute-0 sudo[146080]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:47 compute-0 systemd[1]: Started libpod-conmon-a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a.scope.
Jan 31 06:03:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:03:47 compute-0 podman[146095]: 2026-01-31 06:03:47.716951495 +0000 UTC m=+0.414160193 container init a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:03:47 compute-0 podman[146095]: 2026-01-31 06:03:47.725170105 +0000 UTC m=+0.422378843 container start a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:03:47 compute-0 festive_ishizaka[146138]: 167 167
Jan 31 06:03:47 compute-0 systemd[1]: libpod-a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a.scope: Deactivated successfully.
Jan 31 06:03:47 compute-0 conmon[146138]: conmon a5d33b721963a1ddb362 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a.scope/container/memory.events
Jan 31 06:03:47 compute-0 podman[146095]: 2026-01-31 06:03:47.888035031 +0000 UTC m=+0.585243729 container attach a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ishizaka, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 06:03:47 compute-0 podman[146095]: 2026-01-31 06:03:47.888750121 +0000 UTC m=+0.585958819 container died a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-39117e3e8bb5ed3849321e5899964a47bd1d68e1a922c601da25a508c2cbbc1a-merged.mount: Deactivated successfully.
Jan 31 06:03:48 compute-0 sudo[146280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhndeaxclfwnyrlzprtmqzcwwddmawvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839427.825201-656-50151925907771/AnsiballZ_command.py'
Jan 31 06:03:48 compute-0 sudo[146280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:48 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:03:48 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:03:48 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:03:48 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:03:48 compute-0 ceph-mon[75251]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:48 compute-0 python3.9[146282]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:03:48 compute-0 ovs-vsctl[146283]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 06:03:48 compute-0 sudo[146280]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:48 compute-0 podman[146095]: 2026-01-31 06:03:48.398706598 +0000 UTC m=+1.095915296 container remove a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_ishizaka, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:03:48 compute-0 systemd[1]: libpod-conmon-a5d33b721963a1ddb3620e8eb0700c7bd2b9ffba06f4d973e59067f19f142f7a.scope: Deactivated successfully.
Jan 31 06:03:48 compute-0 podman[146315]: 2026-01-31 06:03:48.57104873 +0000 UTC m=+0.048496141 container create 7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:03:48 compute-0 systemd[1]: Started libpod-conmon-7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca.scope.
Jan 31 06:03:48 compute-0 podman[146315]: 2026-01-31 06:03:48.548620971 +0000 UTC m=+0.026068422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:03:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89bf32ae3fe6dec9d4a435b8095a395eedd0e721fb7fc68dec34de726de4ac3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89bf32ae3fe6dec9d4a435b8095a395eedd0e721fb7fc68dec34de726de4ac3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89bf32ae3fe6dec9d4a435b8095a395eedd0e721fb7fc68dec34de726de4ac3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89bf32ae3fe6dec9d4a435b8095a395eedd0e721fb7fc68dec34de726de4ac3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89bf32ae3fe6dec9d4a435b8095a395eedd0e721fb7fc68dec34de726de4ac3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:48 compute-0 podman[146315]: 2026-01-31 06:03:48.67806055 +0000 UTC m=+0.155507991 container init 7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_herschel, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 06:03:48 compute-0 podman[146315]: 2026-01-31 06:03:48.68845037 +0000 UTC m=+0.165897781 container start 7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_herschel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:03:48 compute-0 podman[146315]: 2026-01-31 06:03:48.692502274 +0000 UTC m=+0.169949685 container attach 7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:03:48 compute-0 sshd-session[134293]: Connection closed by 192.168.122.30 port 51064
Jan 31 06:03:48 compute-0 sshd-session[134290]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:03:48 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 06:03:48 compute-0 systemd[1]: session-45.scope: Consumed 48.989s CPU time.
Jan 31 06:03:48 compute-0 systemd-logind[797]: Session 45 logged out. Waiting for processes to exit.
Jan 31 06:03:48 compute-0 systemd-logind[797]: Removed session 45.
Jan 31 06:03:49 compute-0 lucid_herschel[146332]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:03:49 compute-0 lucid_herschel[146332]: --> All data devices are unavailable
Jan 31 06:03:49 compute-0 systemd[1]: libpod-7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca.scope: Deactivated successfully.
Jan 31 06:03:49 compute-0 podman[146315]: 2026-01-31 06:03:49.182768219 +0000 UTC m=+0.660215680 container died 7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 06:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e89bf32ae3fe6dec9d4a435b8095a395eedd0e721fb7fc68dec34de726de4ac3-merged.mount: Deactivated successfully.
Jan 31 06:03:49 compute-0 podman[146315]: 2026-01-31 06:03:49.319902914 +0000 UTC m=+0.797350315 container remove 7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_herschel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 06:03:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:49 compute-0 systemd[1]: libpod-conmon-7d53169b1778d8b581fb913f69171b6c0ba915c9e14ef60c7fcdb5a51da54fca.scope: Deactivated successfully.
Jan 31 06:03:49 compute-0 sudo[145976]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:49 compute-0 sudo[146366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:03:49 compute-0 sudo[146366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:49 compute-0 sudo[146366]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:49 compute-0 sudo[146391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:03:49 compute-0 sudo[146391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:49 compute-0 podman[146428]: 2026-01-31 06:03:49.726452522 +0000 UTC m=+0.021257347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:03:49 compute-0 podman[146428]: 2026-01-31 06:03:49.971278826 +0000 UTC m=+0.266083611 container create 07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:03:50 compute-0 systemd[1]: Started libpod-conmon-07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621.scope.
Jan 31 06:03:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:03:50 compute-0 podman[146428]: 2026-01-31 06:03:50.541281386 +0000 UTC m=+0.836086171 container init 07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 06:03:50 compute-0 podman[146428]: 2026-01-31 06:03:50.551092432 +0000 UTC m=+0.845897257 container start 07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 06:03:50 compute-0 gifted_shtern[146445]: 167 167
Jan 31 06:03:50 compute-0 systemd[1]: libpod-07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621.scope: Deactivated successfully.
Jan 31 06:03:51 compute-0 ceph-mon[75251]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:51 compute-0 podman[146428]: 2026-01-31 06:03:51.083036165 +0000 UTC m=+1.377840960 container attach 07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 06:03:51 compute-0 podman[146428]: 2026-01-31 06:03:51.083651853 +0000 UTC m=+1.378456658 container died 07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-912161171afe31e0f07ff57b5ceba1927478a1ac85d7ebc6651d1d633046a6f7-merged.mount: Deactivated successfully.
Jan 31 06:03:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:51 compute-0 podman[146428]: 2026-01-31 06:03:51.669343184 +0000 UTC m=+1.964148009 container remove 07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 06:03:51 compute-0 systemd[1]: libpod-conmon-07684826f034672d18e4f5b45efdf81d697c013cf1340b6f838f1eb89452d621.scope: Deactivated successfully.
Jan 31 06:03:51 compute-0 podman[146469]: 2026-01-31 06:03:51.824950436 +0000 UTC m=+0.033174581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:03:51 compute-0 podman[146469]: 2026-01-31 06:03:51.942916234 +0000 UTC m=+0.151140369 container create c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 06:03:52 compute-0 systemd[1]: Started libpod-conmon-c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e.scope.
Jan 31 06:03:52 compute-0 ceph-mon[75251]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335dc2d6cb8ac4bebbae32aad016c71ada50a208b0161819008059ab385ee5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335dc2d6cb8ac4bebbae32aad016c71ada50a208b0161819008059ab385ee5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335dc2d6cb8ac4bebbae32aad016c71ada50a208b0161819008059ab385ee5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335dc2d6cb8ac4bebbae32aad016c71ada50a208b0161819008059ab385ee5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:52 compute-0 podman[146469]: 2026-01-31 06:03:52.207570884 +0000 UTC m=+0.415794999 container init c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chandrasekhar, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 06:03:52 compute-0 podman[146469]: 2026-01-31 06:03:52.216027251 +0000 UTC m=+0.424251356 container start c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 06:03:52 compute-0 podman[146469]: 2026-01-31 06:03:52.220334522 +0000 UTC m=+0.428558627 container attach c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]: {
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:     "0": [
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:         {
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "devices": [
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "/dev/loop3"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             ],
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_name": "ceph_lv0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_size": "21470642176",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "name": "ceph_lv0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "tags": {
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cluster_name": "ceph",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.crush_device_class": "",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.encrypted": "0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.objectstore": "bluestore",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osd_id": "0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.type": "block",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.vdo": "0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.with_tpm": "0"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             },
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "type": "block",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "vg_name": "ceph_vg0"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:         }
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:     ],
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:     "1": [
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:         {
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "devices": [
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "/dev/loop4"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             ],
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_name": "ceph_lv1",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_size": "21470642176",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "name": "ceph_lv1",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "tags": {
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cluster_name": "ceph",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.crush_device_class": "",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.encrypted": "0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.objectstore": "bluestore",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osd_id": "1",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.type": "block",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.vdo": "0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.with_tpm": "0"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             },
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "type": "block",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "vg_name": "ceph_vg1"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:         }
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:     ],
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:     "2": [
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:         {
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "devices": [
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "/dev/loop5"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             ],
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_name": "ceph_lv2",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_size": "21470642176",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "name": "ceph_lv2",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "tags": {
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.cluster_name": "ceph",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.crush_device_class": "",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.encrypted": "0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.objectstore": "bluestore",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osd_id": "2",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.type": "block",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.vdo": "0",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:                 "ceph.with_tpm": "0"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             },
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "type": "block",
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:             "vg_name": "ceph_vg2"
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:         }
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]:     ]
Jan 31 06:03:52 compute-0 vigilant_chandrasekhar[146486]: }
Jan 31 06:03:52 compute-0 systemd[1]: libpod-c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e.scope: Deactivated successfully.
Jan 31 06:03:52 compute-0 podman[146469]: 2026-01-31 06:03:52.52814635 +0000 UTC m=+0.736370435 container died c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chandrasekhar, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 06:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4335dc2d6cb8ac4bebbae32aad016c71ada50a208b0161819008059ab385ee5f-merged.mount: Deactivated successfully.
Jan 31 06:03:52 compute-0 podman[146469]: 2026-01-31 06:03:52.568942844 +0000 UTC m=+0.777166979 container remove c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 06:03:52 compute-0 systemd[1]: libpod-conmon-c68537e05518a86149babd966366595f151c2b4e76d56e8a710eed38e7a50e4e.scope: Deactivated successfully.
Jan 31 06:03:52 compute-0 sudo[146391]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:52 compute-0 sudo[146509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:03:52 compute-0 sudo[146509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:52 compute-0 sudo[146509]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:52 compute-0 sudo[146534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:03:52 compute-0 sudo[146534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:52 compute-0 podman[146572]: 2026-01-31 06:03:52.984301119 +0000 UTC m=+0.036195225 container create 647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:03:53 compute-0 systemd[1]: Started libpod-conmon-647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b.scope.
Jan 31 06:03:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:03:53 compute-0 podman[146572]: 2026-01-31 06:03:53.046618176 +0000 UTC m=+0.098512312 container init 647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:03:53 compute-0 podman[146572]: 2026-01-31 06:03:53.051270907 +0000 UTC m=+0.103165013 container start 647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_proskuriakova, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:03:53 compute-0 exciting_proskuriakova[146589]: 167 167
Jan 31 06:03:53 compute-0 systemd[1]: libpod-647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b.scope: Deactivated successfully.
Jan 31 06:03:53 compute-0 podman[146572]: 2026-01-31 06:03:53.057016068 +0000 UTC m=+0.108910194 container attach 647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 06:03:53 compute-0 podman[146572]: 2026-01-31 06:03:53.057305956 +0000 UTC m=+0.109200062 container died 647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 06:03:53 compute-0 podman[146572]: 2026-01-31 06:03:52.964525675 +0000 UTC m=+0.016419811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-81772d9557fcaff1578269f9532edafdc90f2f578d6cc66f1abcbfb15432f2a1-merged.mount: Deactivated successfully.
Jan 31 06:03:53 compute-0 podman[146572]: 2026-01-31 06:03:53.087709218 +0000 UTC m=+0.139603324 container remove 647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:03:53 compute-0 systemd[1]: libpod-conmon-647850e1dc2b49febc42c676378e5def6a995cefb1a1d1f85f78a92abc4ac01b.scope: Deactivated successfully.
Jan 31 06:03:53 compute-0 podman[146612]: 2026-01-31 06:03:53.221341245 +0000 UTC m=+0.040945059 container create f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:03:53 compute-0 systemd[1]: Started libpod-conmon-f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242.scope.
Jan 31 06:03:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d12fc6daa1519b8e7f2c92d5ec746880a872cc26f01ebae6c1777868c88dd6f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d12fc6daa1519b8e7f2c92d5ec746880a872cc26f01ebae6c1777868c88dd6f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d12fc6daa1519b8e7f2c92d5ec746880a872cc26f01ebae6c1777868c88dd6f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d12fc6daa1519b8e7f2c92d5ec746880a872cc26f01ebae6c1777868c88dd6f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:03:53 compute-0 podman[146612]: 2026-01-31 06:03:53.200599313 +0000 UTC m=+0.020203117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:03:53 compute-0 podman[146612]: 2026-01-31 06:03:53.310439073 +0000 UTC m=+0.130042897 container init f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:03:53 compute-0 podman[146612]: 2026-01-31 06:03:53.316287377 +0000 UTC m=+0.135891191 container start f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leavitt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:03:53 compute-0 podman[146612]: 2026-01-31 06:03:53.320096954 +0000 UTC m=+0.139700738 container attach f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:03:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:03:53 compute-0 lvm[146705]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:03:53 compute-0 lvm[146705]: VG ceph_vg0 finished
Jan 31 06:03:53 compute-0 lvm[146708]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:03:53 compute-0 lvm[146708]: VG ceph_vg1 finished
Jan 31 06:03:53 compute-0 lvm[146710]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:03:53 compute-0 lvm[146710]: VG ceph_vg2 finished
Jan 31 06:03:54 compute-0 determined_leavitt[146629]: {}
Jan 31 06:03:54 compute-0 systemd[1]: libpod-f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242.scope: Deactivated successfully.
Jan 31 06:03:54 compute-0 podman[146612]: 2026-01-31 06:03:54.087965642 +0000 UTC m=+0.907569416 container died f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leavitt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d12fc6daa1519b8e7f2c92d5ec746880a872cc26f01ebae6c1777868c88dd6f1-merged.mount: Deactivated successfully.
Jan 31 06:03:54 compute-0 podman[146612]: 2026-01-31 06:03:54.129052104 +0000 UTC m=+0.948655908 container remove f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 06:03:54 compute-0 systemd[1]: libpod-conmon-f48b26edf686e96ad182aeec03d189f19a6e5167e716ed33fe8b570449f0d242.scope: Deactivated successfully.
Jan 31 06:03:54 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 31 06:03:54 compute-0 systemd[145126]: Activating special unit Exit the Session...
Jan 31 06:03:54 compute-0 systemd[145126]: Stopped target Main User Target.
Jan 31 06:03:54 compute-0 systemd[145126]: Stopped target Basic System.
Jan 31 06:03:54 compute-0 systemd[145126]: Stopped target Paths.
Jan 31 06:03:54 compute-0 systemd[145126]: Stopped target Sockets.
Jan 31 06:03:54 compute-0 systemd[145126]: Stopped target Timers.
Jan 31 06:03:54 compute-0 systemd[145126]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 06:03:54 compute-0 systemd[145126]: Closed D-Bus User Message Bus Socket.
Jan 31 06:03:54 compute-0 systemd[145126]: Stopped Create User's Volatile Files and Directories.
Jan 31 06:03:54 compute-0 systemd[145126]: Removed slice User Application Slice.
Jan 31 06:03:54 compute-0 systemd[145126]: Reached target Shutdown.
Jan 31 06:03:54 compute-0 systemd[145126]: Finished Exit the Session.
Jan 31 06:03:54 compute-0 systemd[145126]: Reached target Exit the Session.
Jan 31 06:03:54 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 06:03:54 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 31 06:03:54 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 06:03:54 compute-0 sudo[146534]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:03:54 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 06:03:54 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 06:03:54 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 06:03:54 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 06:03:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:03:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:03:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:03:54 compute-0 sudo[146726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:03:54 compute-0 sudo[146726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:03:54 compute-0 sudo[146726]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:54 compute-0 ceph-mon[75251]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:03:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:03:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:03:55 compute-0 sshd-session[146751]: Accepted publickey for zuul from 192.168.122.30 port 36498 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:03:55 compute-0 systemd-logind[797]: New session 47 of user zuul.
Jan 31 06:03:55 compute-0 systemd[1]: Started Session 47 of User zuul.
Jan 31 06:03:55 compute-0 sshd-session[146751]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:03:56 compute-0 ceph-mon[75251]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.301824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839438301893, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 840, "num_deletes": 251, "total_data_size": 1179587, "memory_usage": 1197904, "flush_reason": "Manual Compaction"}
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839438343868, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1158305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9012, "largest_seqno": 9851, "table_properties": {"data_size": 1154126, "index_size": 1895, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8854, "raw_average_key_size": 18, "raw_value_size": 1145720, "raw_average_value_size": 2406, "num_data_blocks": 88, "num_entries": 476, "num_filter_entries": 476, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769839363, "oldest_key_time": 1769839363, "file_creation_time": 1769839438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 42080 microseconds, and 3099 cpu microseconds.
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:03:58 compute-0 ceph-mon[75251]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.343914) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1158305 bytes OK
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.343935) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.359160) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.359210) EVENT_LOG_v1 {"time_micros": 1769839438359201, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.359236) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1175448, prev total WAL file size 1176603, number of live WAL files 2.
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.359918) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1131KB)], [23(7014KB)]
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839438360042, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8341173, "oldest_snapshot_seqno": -1}
Jan 31 06:03:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:03:58 compute-0 python3.9[146904]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3372 keys, 6558066 bytes, temperature: kUnknown
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839438442344, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6558066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6532935, "index_size": 15614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 81744, "raw_average_key_size": 24, "raw_value_size": 6469380, "raw_average_value_size": 1918, "num_data_blocks": 680, "num_entries": 3372, "num_filter_entries": 3372, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769839438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.442573) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6558066 bytes
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.446075) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.3 rd, 79.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.9 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(12.9) write-amplify(5.7) OK, records in: 3886, records dropped: 514 output_compression: NoCompression
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.446107) EVENT_LOG_v1 {"time_micros": 1769839438446093, "job": 8, "event": "compaction_finished", "compaction_time_micros": 82345, "compaction_time_cpu_micros": 20562, "output_level": 6, "num_output_files": 1, "total_output_size": 6558066, "num_input_records": 3886, "num_output_records": 3372, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839438446401, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839438447076, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.359781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.447106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.447124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.447126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.447127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:03:58 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:03:58.447129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:03:59 compute-0 sudo[147059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htfjtdzwzxbyzvwaphfthmevlydyokgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839438.8806543-29-103629394600536/AnsiballZ_file.py'
Jan 31 06:03:59 compute-0 sudo[147059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:03:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:03:59 compute-0 python3.9[147061]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:03:59 compute-0 sudo[147059]: pam_unix(sudo:session): session closed for user root
Jan 31 06:03:59 compute-0 sudo[147211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjxzdgycmobuuwegkbmhkpjlqndstfxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839439.5816815-29-239549012954248/AnsiballZ_file.py'
Jan 31 06:03:59 compute-0 sudo[147211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:00 compute-0 python3.9[147213]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:00 compute-0 sudo[147211]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:00 compute-0 sudo[147363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeczjeewsmebrgepxcfhaqvccltftfsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839440.1524532-29-163466981970566/AnsiballZ_file.py'
Jan 31 06:04:00 compute-0 sudo[147363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:00 compute-0 ceph-mon[75251]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:00 compute-0 python3.9[147365]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:00 compute-0 sudo[147363]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:00 compute-0 sudo[147515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvrwzwiuwipawmnjflsknzgjjkjwnysn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839440.656332-29-240261538481307/AnsiballZ_file.py'
Jan 31 06:04:00 compute-0 sudo[147515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:01 compute-0 python3.9[147517]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:01 compute-0 sudo[147515]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:01 compute-0 sudo[147667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvaibbflgmabkltuukwgentypsafdiik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839441.1855164-29-80514566059539/AnsiballZ_file.py'
Jan 31 06:04:01 compute-0 sudo[147667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:01 compute-0 python3.9[147669]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:01 compute-0 sudo[147667]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:01 compute-0 ceph-mon[75251]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:02 compute-0 python3.9[147819]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:04:02 compute-0 sudo[147970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yimixoujntoonfxmsgdchlpcccdxgkgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839442.4489903-73-97863103676987/AnsiballZ_seboolean.py'
Jan 31 06:04:02 compute-0 sudo[147970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:02 compute-0 python3.9[147972]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 06:04:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:03 compute-0 sudo[147970]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:04 compute-0 python3.9[148122]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:04 compute-0 ceph-mon[75251]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:04 compute-0 python3.9[148243]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839443.6620662-81-152081358707380/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:05 compute-0 python3.9[148393]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:05 compute-0 python3.9[148514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839444.9973023-96-39455518866839/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:06 compute-0 sudo[148664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtnozzhedlnhgczabdeyruibokremsww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839446.046604-113-252589810754616/AnsiballZ_setup.py'
Jan 31 06:04:06 compute-0 sudo[148664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:06 compute-0 ceph-mon[75251]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:06 compute-0 python3.9[148666]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:04:06 compute-0 sudo[148664]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:07 compute-0 sudo[148748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzlcesnbxvqltjdbtgqhzwidwdfrlhnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839446.046604-113-252589810754616/AnsiballZ_dnf.py'
Jan 31 06:04:07 compute-0 sudo[148748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:07 compute-0 python3.9[148750]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:04:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:08 compute-0 ceph-mon[75251]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:08 compute-0 sudo[148748]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:09 compute-0 sudo[148901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oisjzwoswfptjefsweixzmntlddebfwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839448.7307668-125-25318996508819/AnsiballZ_systemd.py'
Jan 31 06:04:09 compute-0 sudo[148901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:09 compute-0 python3.9[148903]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:04:09 compute-0 sudo[148901]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:10 compute-0 python3.9[149056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:10 compute-0 ceph-mon[75251]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:10 compute-0 python3.9[149177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839449.89384-133-265788994375777/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:11 compute-0 python3.9[149327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:11 compute-0 python3.9[149448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839450.8800626-133-250370626688480/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:12 compute-0 ceph-mon[75251]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:12 compute-0 python3.9[149598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:13 compute-0 python3.9[149719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839452.3877552-177-68528749049947/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:13 compute-0 python3.9[149869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:14 compute-0 ovn_controller[145108]: 2026-01-31T06:04:14Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Jan 31 06:04:14 compute-0 ovn_controller[145108]: 2026-01-31T06:04:14Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 31 06:04:14 compute-0 podman[149964]: 2026-01-31 06:04:14.116994572 +0000 UTC m=+0.091046373 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:04:14 compute-0 python3.9[150003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839453.4137564-177-12657425583488/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:14 compute-0 ceph-mon[75251]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:14 compute-0 python3.9[150166]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:04:15 compute-0 sudo[150318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlnvdiuscwhdwwbgltifaqssqiskljas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839455.0360723-215-31845432823252/AnsiballZ_file.py'
Jan 31 06:04:15 compute-0 sudo[150318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:04:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:04:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:04:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:04:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:04:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:04:15 compute-0 python3.9[150320]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:15 compute-0 sudo[150318]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:15 compute-0 sudo[150470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrrpkxaymsofuxeylynenefvvkeegvtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839455.6601639-223-134811152183733/AnsiballZ_stat.py'
Jan 31 06:04:15 compute-0 sudo[150470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:16 compute-0 python3.9[150472]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:16 compute-0 sudo[150470]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:16 compute-0 sudo[150548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikoaxmubgbhgiqmjtbxgaujgsklkroox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839455.6601639-223-134811152183733/AnsiballZ_file.py'
Jan 31 06:04:16 compute-0 sudo[150548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:16 compute-0 ceph-mon[75251]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:16 compute-0 python3.9[150550]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:16 compute-0 sudo[150548]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:17 compute-0 sudo[150700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpigankaipcjncjncydfuprdjnllydyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839456.8323958-223-257307427363411/AnsiballZ_stat.py'
Jan 31 06:04:17 compute-0 sudo[150700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:17 compute-0 python3.9[150702]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:17 compute-0 sudo[150700]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:17 compute-0 sudo[150778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgovzvqluenoxwlscttrajllutpepumo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839456.8323958-223-257307427363411/AnsiballZ_file.py'
Jan 31 06:04:17 compute-0 sudo[150778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:17 compute-0 python3.9[150780]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:17 compute-0 sudo[150778]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:18 compute-0 sudo[150930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkmlprwrpuvebwvlphalxufgxelokvox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839457.8141081-246-18966229524184/AnsiballZ_file.py'
Jan 31 06:04:18 compute-0 sudo[150930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:18 compute-0 python3.9[150932]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:18 compute-0 sudo[150930]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:18 compute-0 sudo[151082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpffkebismzflljrrzacdfugtrprrgix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839458.4752486-254-156516015850002/AnsiballZ_stat.py'
Jan 31 06:04:18 compute-0 sudo[151082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:18 compute-0 ceph-mon[75251]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:18 compute-0 python3.9[151084]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:18 compute-0 sudo[151082]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:19 compute-0 sudo[151160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqyxvspvifsexfpsaxfqkofqssepeeni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839458.4752486-254-156516015850002/AnsiballZ_file.py'
Jan 31 06:04:19 compute-0 sudo[151160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:19 compute-0 python3.9[151162]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:19 compute-0 sudo[151160]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:19 compute-0 ceph-mon[75251]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:19 compute-0 sudo[151312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdmnysfmhprvapnkqifnxeoniaxbsfwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839459.544693-266-134873469977714/AnsiballZ_stat.py'
Jan 31 06:04:19 compute-0 sudo[151312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:19 compute-0 python3.9[151314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:20 compute-0 sudo[151312]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:20 compute-0 sudo[151390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmdyfxsjsbspsklnkeqkukeloxdfhdmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839459.544693-266-134873469977714/AnsiballZ_file.py'
Jan 31 06:04:20 compute-0 sudo[151390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:20 compute-0 python3.9[151392]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:20 compute-0 sudo[151390]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:20 compute-0 sudo[151542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvuhozbrthekthchvfvbvmnhpuursqhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839460.5580432-278-205793846346142/AnsiballZ_systemd.py'
Jan 31 06:04:20 compute-0 sudo[151542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:21 compute-0 python3.9[151544]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:04:21 compute-0 systemd[1]: Reloading.
Jan 31 06:04:21 compute-0 systemd-rc-local-generator[151569]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:04:21 compute-0 systemd-sysv-generator[151573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:04:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:21 compute-0 sudo[151542]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:21 compute-0 sudo[151732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vckkgdisqvgfvvikziqnaaurulcoluyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839461.6165576-286-103091435487125/AnsiballZ_stat.py'
Jan 31 06:04:21 compute-0 sudo[151732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:22 compute-0 python3.9[151734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:22 compute-0 sudo[151732]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:22 compute-0 sudo[151810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itskgjqfyjsiwryqdfsxuvtapbmulffe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839461.6165576-286-103091435487125/AnsiballZ_file.py'
Jan 31 06:04:22 compute-0 sudo[151810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:22 compute-0 ceph-mon[75251]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:22 compute-0 python3.9[151812]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:22 compute-0 sudo[151810]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:22 compute-0 sudo[151962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bllvroeeclvtpifhzaqmuizucnexhoai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839462.7478013-298-4794136644058/AnsiballZ_stat.py'
Jan 31 06:04:22 compute-0 sudo[151962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:23 compute-0 python3.9[151964]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:23 compute-0 sudo[151962]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:23 compute-0 sudo[152040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlpzpbsthkzsiodrjidphbvpfdpqlrjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839462.7478013-298-4794136644058/AnsiballZ_file.py'
Jan 31 06:04:23 compute-0 sudo[152040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:23 compute-0 python3.9[152042]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:23 compute-0 sudo[152040]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:24 compute-0 sudo[152192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbyonajqmoulodjxerbntbpphfbbjuaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839463.8452227-310-167213892339642/AnsiballZ_systemd.py'
Jan 31 06:04:24 compute-0 sudo[152192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:24 compute-0 python3.9[152194]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:04:24 compute-0 ceph-mon[75251]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:24 compute-0 systemd[1]: Reloading.
Jan 31 06:04:24 compute-0 systemd-sysv-generator[152219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:04:24 compute-0 systemd-rc-local-generator[152212]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:04:24 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 06:04:24 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 06:04:24 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 06:04:24 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 06:04:24 compute-0 sudo[152192]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:25 compute-0 sudo[152385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptcoknjlkoacqjimmjrlyfxzydlpbrwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839464.9885552-320-196806314348283/AnsiballZ_file.py'
Jan 31 06:04:25 compute-0 sudo[152385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:25 compute-0 python3.9[152387]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:25 compute-0 sudo[152385]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:25 compute-0 sudo[152537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmqlxaxwwmxehlpvtnyrdrcfwphslyng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839465.6793094-328-224745809220333/AnsiballZ_stat.py'
Jan 31 06:04:25 compute-0 sudo[152537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:26 compute-0 python3.9[152539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:26 compute-0 sudo[152537]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:26 compute-0 ceph-mon[75251]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:26 compute-0 sudo[152660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlqjyuyadpaqpqsoavfowbapsdtuwsgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839465.6793094-328-224745809220333/AnsiballZ_copy.py'
Jan 31 06:04:26 compute-0 sudo[152660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:26 compute-0 python3.9[152662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839465.6793094-328-224745809220333/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:26 compute-0 sudo[152660]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:27 compute-0 sudo[152812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdxjfkrqenzcusxwwglknafdqepaynxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839466.9717722-345-208811120946996/AnsiballZ_file.py'
Jan 31 06:04:27 compute-0 sudo[152812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:27 compute-0 python3.9[152814]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:27 compute-0 sudo[152812]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:27 compute-0 sudo[152964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpafyhxmyukmngxiufkdsagzuoehaori ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839467.6865296-353-15235602841624/AnsiballZ_file.py'
Jan 31 06:04:27 compute-0 sudo[152964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:28 compute-0 python3.9[152966]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:04:28 compute-0 sudo[152964]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:28 compute-0 ceph-mon[75251]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:28 compute-0 sudo[153116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvfpxblgqfeyxjrakxdgdhmqqkhball ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839468.3370337-361-10748578496270/AnsiballZ_stat.py'
Jan 31 06:04:28 compute-0 sudo[153116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:28 compute-0 python3.9[153118]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:28 compute-0 sudo[153116]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:29 compute-0 sudo[153239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eufqbjcsyqighqhumgostxysimontbpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839468.3370337-361-10748578496270/AnsiballZ_copy.py'
Jan 31 06:04:29 compute-0 sudo[153239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:29 compute-0 python3.9[153241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839468.3370337-361-10748578496270/.source.json _original_basename=.kgy5gmee follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:29 compute-0 sudo[153239]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:29 compute-0 python3.9[153391]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:29 compute-0 ceph-mon[75251]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:31 compute-0 sudo[153812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uybtfytdubgmvyvnojiszjhaxlcidepn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839471.192428-401-178281985520458/AnsiballZ_container_config_data.py'
Jan 31 06:04:31 compute-0 sudo[153812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:31 compute-0 python3.9[153814]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 06:04:31 compute-0 sudo[153812]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:32 compute-0 ceph-mon[75251]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:32 compute-0 sudo[153964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcnimaptzxtsufzppqskycecmownigtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839472.2134204-412-109258989408876/AnsiballZ_container_config_hash.py'
Jan 31 06:04:32 compute-0 sudo[153964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:32 compute-0 python3.9[153966]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 06:04:32 compute-0 sudo[153964]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:33 compute-0 sudo[154117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whrmxdwiobmqcaxjpkcyojqnatghahof ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769839473.1388357-422-45147509768280/AnsiballZ_edpm_container_manage.py'
Jan 31 06:04:33 compute-0 sudo[154117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:33 compute-0 python3[154119]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 06:04:34 compute-0 ceph-mon[75251]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:36 compute-0 ceph-mon[75251]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:37 compute-0 ceph-mon[75251]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:04:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5604 writes, 25K keys, 5604 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5604 writes, 861 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5604 writes, 25K keys, 5604 commit groups, 1.0 writes per commit group, ingest: 19.11 MB, 0.03 MB/s
                                           Interval WAL: 5604 writes, 861 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:04:40 compute-0 ceph-mon[75251]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:42 compute-0 ceph-mon[75251]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:43 compute-0 ceph-mon[75251]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:43 compute-0 podman[154132]: 2026-01-31 06:04:43.858645078 +0000 UTC m=+9.951583744 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 06:04:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:04:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 8312 writes, 34K keys, 8312 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8312 writes, 1633 syncs, 5.09 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8312 writes, 34K keys, 8312 commit groups, 1.0 writes per commit group, ingest: 21.26 MB, 0.04 MB/s
                                           Interval WAL: 8312 writes, 1633 syncs, 5.09 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:04:44 compute-0 podman[154259]: 2026-01-31 06:04:44.047069634 +0000 UTC m=+0.054473269 container create 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:04:44 compute-0 podman[154259]: 2026-01-31 06:04:44.022846373 +0000 UTC m=+0.030250028 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 06:04:44 compute-0 python3[154119]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 06:04:44 compute-0 sudo[154117]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:04:44
Jan 31 06:04:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:04:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:04:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', 'backups', '.rgw.root', '.mgr']
Jan 31 06:04:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:04:44 compute-0 sudo[154458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeddplyuwzgzrarkidvxxuzngmjlvcrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839484.3091168-430-127590154940484/AnsiballZ_stat.py'
Jan 31 06:04:44 compute-0 sudo[154458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:44 compute-0 podman[154421]: 2026-01-31 06:04:44.661873483 +0000 UTC m=+0.116705461 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:04:44 compute-0 python3.9[154462]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:04:44 compute-0 sudo[154458]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:04:45 compute-0 sudo[154627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqtjddqtghurkumvalgigcyiifdjhkwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839485.0643733-439-112181141246047/AnsiballZ_file.py'
Jan 31 06:04:45 compute-0 sudo[154627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:04:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:04:45 compute-0 python3.9[154629]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:45 compute-0 sudo[154627]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:45 compute-0 sudo[154703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exreunbcpsvmziqxjjgcxqkucyroidyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839485.0643733-439-112181141246047/AnsiballZ_stat.py'
Jan 31 06:04:45 compute-0 sudo[154703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:46 compute-0 python3.9[154705]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:04:46 compute-0 sudo[154703]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:46 compute-0 ceph-mon[75251]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:46 compute-0 sudo[154854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znbwmulshkmgdjnsarlubgekfkhfkbui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839486.0762815-439-218443663042706/AnsiballZ_copy.py'
Jan 31 06:04:46 compute-0 sudo[154854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:46 compute-0 python3.9[154856]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839486.0762815-439-218443663042706/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:46 compute-0 sudo[154854]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:46 compute-0 sudo[154930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehefutqtkkvpprqhvjllhqvzcyuwmypn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839486.0762815-439-218443663042706/AnsiballZ_systemd.py'
Jan 31 06:04:46 compute-0 sudo[154930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:47 compute-0 python3.9[154932]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:04:47 compute-0 systemd[1]: Reloading.
Jan 31 06:04:47 compute-0 systemd-sysv-generator[154964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:04:47 compute-0 systemd-rc-local-generator[154960]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:04:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:47 compute-0 sudo[154930]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:47 compute-0 sudo[155042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahxsgpxgfjsmlvadhgpoefzcxsrbtnpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839486.0762815-439-218443663042706/AnsiballZ_systemd.py'
Jan 31 06:04:47 compute-0 sudo[155042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:47 compute-0 python3.9[155044]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:04:47 compute-0 systemd[1]: Reloading.
Jan 31 06:04:48 compute-0 systemd-rc-local-generator[155068]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:04:48 compute-0 systemd-sysv-generator[155072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:04:48 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 06:04:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821380459b7fc80aa7eabab1d2b9a202ad643893d106a2afcfaebaa94c7d5763/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821380459b7fc80aa7eabab1d2b9a202ad643893d106a2afcfaebaa94c7d5763/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:48 compute-0 ceph-mon[75251]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:48 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08.
Jan 31 06:04:48 compute-0 podman[155085]: 2026-01-31 06:04:48.47174549 +0000 UTC m=+0.185821495 container init 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + sudo -E kolla_set_configs
Jan 31 06:04:48 compute-0 podman[155085]: 2026-01-31 06:04:48.500838925 +0000 UTC m=+0.214914900 container start 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:04:48 compute-0 edpm-start-podman-container[155085]: ovn_metadata_agent
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Validating config file
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Copying service configuration files
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Writing out command to execute
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 06:04:48 compute-0 edpm-start-podman-container[155084]: Creating additional drop-in dependency for "ovn_metadata_agent" (5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08)
Jan 31 06:04:48 compute-0 podman[155107]: 2026-01-31 06:04:48.569044653 +0000 UTC m=+0.061937505 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: ++ cat /run_command
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + CMD=neutron-ovn-metadata-agent
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + ARGS=
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + sudo kolla_copy_cacerts
Jan 31 06:04:48 compute-0 systemd[1]: Reloading.
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + [[ ! -n '' ]]
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + . kolla_extend_start
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + umask 0022
Jan 31 06:04:48 compute-0 ovn_metadata_agent[155100]: + exec neutron-ovn-metadata-agent
Jan 31 06:04:48 compute-0 systemd-rc-local-generator[155170]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:04:48 compute-0 systemd-sysv-generator[155174]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:04:48 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 31 06:04:48 compute-0 sudo[155042]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:49 compute-0 python3.9[155337]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 06:04:50 compute-0 ceph-mon[75251]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.161 155105 INFO neutron.common.config [-] Logging enabled!
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.161 155105 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.162 155105 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.162 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.162 155105 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.162 155105 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.162 155105 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.162 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.163 155105 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.164 155105 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.165 155105 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.166 155105 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.167 155105 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.168 155105 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.168 155105 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.168 155105 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.168 155105 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.168 155105 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.168 155105 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.168 155105 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.168 155105 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.169 155105 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.169 155105 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.169 155105 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.169 155105 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.169 155105 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.169 155105 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.170 155105 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.171 155105 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.172 155105 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.173 155105 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.174 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.175 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.175 155105 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.175 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.175 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.175 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.175 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.175 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.175 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.176 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.177 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.178 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.179 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.180 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.181 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.182 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.183 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.183 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.183 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.183 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.183 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.183 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.183 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.183 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.184 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.185 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.186 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.187 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.187 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.187 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.187 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.187 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.187 155105 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.187 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.187 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.188 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.188 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.188 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.188 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.188 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.188 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.188 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.188 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.189 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.190 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.190 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.190 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.190 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.190 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.190 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.190 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.190 155105 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.191 155105 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.191 155105 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.191 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.191 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.191 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.191 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.191 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.192 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.193 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.193 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.193 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.193 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.193 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.193 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.193 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.193 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.194 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.195 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.195 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.195 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.195 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.195 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.195 155105 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.195 155105 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.204 155105 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.204 155105 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.204 155105 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.204 155105 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.205 155105 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.217 155105 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name bf4b4a34-237c-4fe2-88ca-4e5346644b6b (UUID: bf4b4a34-237c-4fe2-88ca-4e5346644b6b) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.245 155105 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.245 155105 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.245 155105 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.245 155105 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.248 155105 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.253 155105 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.259 155105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'bf4b4a34-237c-4fe2-88ca-4e5346644b6b'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa23b2f8af0>], external_ids={}, name=bf4b4a34-237c-4fe2-88ca-4e5346644b6b, nb_cfg_timestamp=1769839432213, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.260 155105 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fa23b2fbb20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.260 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.261 155105 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.261 155105 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.261 155105 INFO oslo_service.service [-] Starting 1 workers
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.264 155105 DEBUG oslo_service.service [-] Started child 155362 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.266 155105 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpg9q6zjfk/privsep.sock']
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.267 155362 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-494490'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.289 155362 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.290 155362 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.290 155362 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.293 155362 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.299 155362 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.305 155362 INFO eventlet.wsgi.server [-] (155362) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 31 06:04:50 compute-0 sudo[155492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syvdfkrtpdbdmtfziaixxbwjrxrchkub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839490.3110306-484-222627475949755/AnsiballZ_stat.py'
Jan 31 06:04:50 compute-0 sudo[155492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:50 compute-0 python3.9[155494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:04:50 compute-0 sudo[155492]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:50 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 06:04:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:04:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5525 writes, 24K keys, 5525 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5525 writes, 786 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5525 writes, 24K keys, 5525 commit groups, 1.0 writes per commit group, ingest: 18.88 MB, 0.03 MB/s
                                           Interval WAL: 5525 writes, 786 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.929 155105 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.929 155105 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpg9q6zjfk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.796 155499 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.799 155499 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.801 155499 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.801 155499 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155499
Jan 31 06:04:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:50.931 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf6247a-552f-4c1a-a7eb-b66460f2e9d9]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 06:04:51 compute-0 sudo[155621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywesqxyfbezjmyknunomqooijwebgdes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839490.3110306-484-222627475949755/AnsiballZ_copy.py'
Jan 31 06:04:51 compute-0 sudo[155621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:04:51 compute-0 python3.9[155623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839490.3110306-484-222627475949755/.source.yaml _original_basename=.yo6jm7m0 follow=False checksum=3fabb16507294fac122d794a33c082f814e19832 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:04:51 compute-0 sudo[155621]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:51 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:51.416 155499 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:04:51 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:51.416 155499 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:04:51 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:51.416 155499 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:04:51 compute-0 sshd-session[146754]: Connection closed by 192.168.122.30 port 36498
Jan 31 06:04:51 compute-0 sshd-session[146751]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:04:51 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Jan 31 06:04:51 compute-0 systemd[1]: session-47.scope: Consumed 47.196s CPU time.
Jan 31 06:04:51 compute-0 systemd-logind[797]: Session 47 logged out. Waiting for processes to exit.
Jan 31 06:04:51 compute-0 systemd-logind[797]: Removed session 47.
Jan 31 06:04:51 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:51.908 155499 DEBUG oslo.privsep.daemon [-] privsep: reply[8f71f248-70e0-4524-9556-bb319f8e6213]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 06:04:51 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:51.910 155105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=bf4b4a34-237c-4fe2-88ca-4e5346644b6b, column=external_ids, values=({'neutron:ovn-metadata-id': '9f7d9d4a-d8f5-55ea-8cc3-b11c5935cb50'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 06:04:51 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:51.959 155105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bf4b4a34-237c-4fe2-88ca-4e5346644b6b, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.080 155105 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.081 155105 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.081 155105 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.081 155105 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.081 155105 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.081 155105 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.081 155105 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.082 155105 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.082 155105 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.082 155105 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.082 155105 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.082 155105 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.082 155105 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.083 155105 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.083 155105 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.083 155105 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.083 155105 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.083 155105 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.083 155105 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.083 155105 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.084 155105 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.084 155105 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.084 155105 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.084 155105 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.084 155105 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.084 155105 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.085 155105 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.085 155105 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.085 155105 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.085 155105 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.085 155105 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.085 155105 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.085 155105 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.086 155105 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.086 155105 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.086 155105 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.086 155105 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.086 155105 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.086 155105 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.087 155105 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.087 155105 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.087 155105 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.087 155105 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.087 155105 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.087 155105 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.087 155105 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.088 155105 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.088 155105 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.088 155105 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.088 155105 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.088 155105 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.088 155105 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.088 155105 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.089 155105 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.089 155105 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.089 155105 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.089 155105 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.089 155105 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.089 155105 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.089 155105 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.090 155105 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.090 155105 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.090 155105 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.090 155105 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.090 155105 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.090 155105 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.090 155105 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.091 155105 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.091 155105 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.091 155105 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.091 155105 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.091 155105 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.091 155105 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.091 155105 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.091 155105 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.092 155105 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.092 155105 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.092 155105 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.092 155105 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.092 155105 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.092 155105 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.092 155105 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.093 155105 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.093 155105 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.093 155105 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.093 155105 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.093 155105 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.093 155105 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.093 155105 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.094 155105 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.094 155105 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.094 155105 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.094 155105 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.094 155105 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.094 155105 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.094 155105 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.094 155105 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.095 155105 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.095 155105 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.095 155105 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.095 155105 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.095 155105 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.095 155105 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.095 155105 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.095 155105 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.096 155105 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.096 155105 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.096 155105 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.096 155105 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.096 155105 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.096 155105 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.097 155105 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.097 155105 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.097 155105 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.097 155105 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.097 155105 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.097 155105 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.097 155105 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.098 155105 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.098 155105 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.098 155105 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.098 155105 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.098 155105 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.098 155105 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.098 155105 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.099 155105 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.099 155105 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.099 155105 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.099 155105 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.099 155105 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.099 155105 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.099 155105 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.100 155105 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.100 155105 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.100 155105 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.100 155105 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.100 155105 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.100 155105 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.100 155105 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.101 155105 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.101 155105 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.101 155105 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.101 155105 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.101 155105 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.101 155105 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.101 155105 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.102 155105 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.102 155105 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.102 155105 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.102 155105 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.102 155105 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.102 155105 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.102 155105 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.102 155105 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.103 155105 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.103 155105 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.103 155105 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.103 155105 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.103 155105 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.103 155105 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.103 155105 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.103 155105 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.104 155105 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.104 155105 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.104 155105 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.104 155105 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.104 155105 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.104 155105 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.104 155105 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.105 155105 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.105 155105 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.105 155105 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.105 155105 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.105 155105 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.105 155105 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.105 155105 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.105 155105 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.106 155105 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.106 155105 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.106 155105 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.106 155105 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.106 155105 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.106 155105 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.106 155105 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.107 155105 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.107 155105 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.107 155105 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.107 155105 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.107 155105 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.107 155105 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.107 155105 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.108 155105 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.108 155105 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.108 155105 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.108 155105 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.108 155105 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.108 155105 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.108 155105 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.109 155105 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.109 155105 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.109 155105 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.109 155105 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.109 155105 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.109 155105 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.109 155105 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.110 155105 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.110 155105 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.110 155105 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.110 155105 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.110 155105 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.110 155105 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.110 155105 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.110 155105 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.111 155105 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.111 155105 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.111 155105 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.111 155105 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.111 155105 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.111 155105 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.111 155105 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.112 155105 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.112 155105 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.112 155105 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.112 155105 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.112 155105 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.112 155105 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.112 155105 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.112 155105 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.113 155105 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.113 155105 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.113 155105 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.113 155105 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.113 155105 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.113 155105 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.113 155105 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.113 155105 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.114 155105 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.114 155105 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.114 155105 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.114 155105 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.114 155105 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.114 155105 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.114 155105 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.115 155105 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.115 155105 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.115 155105 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.115 155105 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.115 155105 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.115 155105 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.115 155105 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.116 155105 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.116 155105 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.116 155105 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.116 155105 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.116 155105 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.116 155105 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.116 155105 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.116 155105 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.117 155105 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.117 155105 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.117 155105 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.117 155105 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.117 155105 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.117 155105 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.117 155105 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.118 155105 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.118 155105 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.118 155105 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.118 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.118 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.118 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.118 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.119 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.119 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.119 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.119 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.119 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.119 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.119 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.120 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.120 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.120 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.120 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.120 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.120 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.120 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.120 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.121 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.121 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.121 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.121 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.121 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.121 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.121 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.122 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.122 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.122 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.122 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.122 155105 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.122 155105 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.123 155105 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.123 155105 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.123 155105 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:04:52 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:04:52.123 155105 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 06:04:52 compute-0 ceph-mon[75251]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:54 compute-0 ceph-mgr[75550]: [devicehealth INFO root] Check health
Jan 31 06:04:54 compute-0 sudo[155649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:04:54 compute-0 sudo[155649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:54 compute-0 sudo[155649]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:54 compute-0 sudo[155674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 06:04:54 compute-0 sudo[155674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:54 compute-0 ceph-mon[75251]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:54 compute-0 sudo[155674]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:04:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:04:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:04:54 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:04:54 compute-0 sudo[155719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:04:54 compute-0 sudo[155719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:54 compute-0 sudo[155719]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:54 compute-0 sudo[155744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:04:54 compute-0 sudo[155744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:55 compute-0 sudo[155744]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:04:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:04:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:04:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:04:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:04:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:04:55 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:04:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:04:55 compute-0 sudo[155799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:04:55 compute-0 sudo[155799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:55 compute-0 sudo[155799]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:55 compute-0 sudo[155824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:04:55 compute-0 sudo[155824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:55 compute-0 podman[155861]: 2026-01-31 06:04:55.508255227 +0000 UTC m=+0.071663205 container create e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 06:04:55 compute-0 systemd[1]: Started libpod-conmon-e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d.scope.
Jan 31 06:04:55 compute-0 podman[155861]: 2026-01-31 06:04:55.457191633 +0000 UTC m=+0.020599651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:04:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:04:55 compute-0 podman[155861]: 2026-01-31 06:04:55.621230645 +0000 UTC m=+0.184638653 container init e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 06:04:55 compute-0 podman[155861]: 2026-01-31 06:04:55.627578461 +0000 UTC m=+0.190986439 container start e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:04:55 compute-0 podman[155861]: 2026-01-31 06:04:55.657992993 +0000 UTC m=+0.221400971 container attach e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:04:55 compute-0 distracted_herschel[155877]: 167 167
Jan 31 06:04:55 compute-0 systemd[1]: libpod-e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d.scope: Deactivated successfully.
Jan 31 06:04:55 compute-0 podman[155861]: 2026-01-31 06:04:55.659852344 +0000 UTC m=+0.223260322 container died e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 06:04:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:04:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:04:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:04:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:04:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65bdaccf130b47c6487cf53b54e9ee6b6bee01550dcf7d09557cdf7c8dba298-merged.mount: Deactivated successfully.
Jan 31 06:04:55 compute-0 podman[155861]: 2026-01-31 06:04:55.894316495 +0000 UTC m=+0.457724483 container remove e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 31 06:04:55 compute-0 systemd[1]: libpod-conmon-e14466c54a2d1effe845430abc2fd1d7dd1a9ae809926c2e4337b68c784adc3d.scope: Deactivated successfully.
Jan 31 06:04:56 compute-0 podman[155900]: 2026-01-31 06:04:56.045285135 +0000 UTC m=+0.039689660 container create c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:04:56 compute-0 systemd[1]: Started libpod-conmon-c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3.scope.
Jan 31 06:04:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0479e24c016484632bb914bc2f6e69c8057d8822f44f2c8cec568ce38680d843/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0479e24c016484632bb914bc2f6e69c8057d8822f44f2c8cec568ce38680d843/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0479e24c016484632bb914bc2f6e69c8057d8822f44f2c8cec568ce38680d843/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0479e24c016484632bb914bc2f6e69c8057d8822f44f2c8cec568ce38680d843/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0479e24c016484632bb914bc2f6e69c8057d8822f44f2c8cec568ce38680d843/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:56 compute-0 podman[155900]: 2026-01-31 06:04:56.105829071 +0000 UTC m=+0.100233626 container init c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:04:56 compute-0 podman[155900]: 2026-01-31 06:04:56.110254973 +0000 UTC m=+0.104659508 container start c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:04:56 compute-0 podman[155900]: 2026-01-31 06:04:56.113579495 +0000 UTC m=+0.107984050 container attach c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_grothendieck, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:04:56 compute-0 podman[155900]: 2026-01-31 06:04:56.026218747 +0000 UTC m=+0.020623302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:04:56 compute-0 sleepy_grothendieck[155916]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:04:56 compute-0 sleepy_grothendieck[155916]: --> All data devices are unavailable
Jan 31 06:04:56 compute-0 systemd[1]: libpod-c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3.scope: Deactivated successfully.
Jan 31 06:04:56 compute-0 podman[155900]: 2026-01-31 06:04:56.546475029 +0000 UTC m=+0.540879554 container died c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 31 06:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0479e24c016484632bb914bc2f6e69c8057d8822f44f2c8cec568ce38680d843-merged.mount: Deactivated successfully.
Jan 31 06:04:56 compute-0 podman[155900]: 2026-01-31 06:04:56.590589541 +0000 UTC m=+0.584994066 container remove c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:04:56 compute-0 systemd[1]: libpod-conmon-c28c2abdb50f3fecf7ee22098f3102607f411958c127ad356edc841e792cf6f3.scope: Deactivated successfully.
Jan 31 06:04:56 compute-0 sudo[155824]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:56 compute-0 sudo[155947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:04:56 compute-0 sudo[155947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:56 compute-0 sudo[155947]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:56 compute-0 sudo[155972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:04:56 compute-0 ceph-mon[75251]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:56 compute-0 sudo[155972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:56 compute-0 podman[156009]: 2026-01-31 06:04:56.943322535 +0000 UTC m=+0.028021137 container create cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 06:04:56 compute-0 systemd[1]: Started libpod-conmon-cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1.scope.
Jan 31 06:04:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:04:56 compute-0 podman[156009]: 2026-01-31 06:04:56.990646175 +0000 UTC m=+0.075344817 container init cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:04:56 compute-0 podman[156009]: 2026-01-31 06:04:56.994952584 +0000 UTC m=+0.079651186 container start cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:04:56 compute-0 podman[156009]: 2026-01-31 06:04:56.998027689 +0000 UTC m=+0.082726301 container attach cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:04:56 compute-0 bold_gould[156025]: 167 167
Jan 31 06:04:56 compute-0 systemd[1]: libpod-cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1.scope: Deactivated successfully.
Jan 31 06:04:56 compute-0 podman[156009]: 2026-01-31 06:04:56.999763857 +0000 UTC m=+0.084462459 container died cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:04:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-65d24e56109f1fd0f1bcd460b0d02e054837c40e9d8cd0945a74515c3612e605-merged.mount: Deactivated successfully.
Jan 31 06:04:57 compute-0 podman[156009]: 2026-01-31 06:04:56.930277283 +0000 UTC m=+0.014975905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:04:57 compute-0 podman[156009]: 2026-01-31 06:04:57.030034705 +0000 UTC m=+0.114733307 container remove cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 06:04:57 compute-0 systemd[1]: libpod-conmon-cb6acea1251057985271688586376c3dad5a32af139c7cfb6fcfa4c41ec663e1.scope: Deactivated successfully.
Jan 31 06:04:57 compute-0 podman[156048]: 2026-01-31 06:04:57.150310995 +0000 UTC m=+0.044434321 container create 43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 06:04:57 compute-0 systemd[1]: Started libpod-conmon-43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb.scope.
Jan 31 06:04:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:04:57 compute-0 podman[156048]: 2026-01-31 06:04:57.124905891 +0000 UTC m=+0.019029207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6097b0eb918a4b895b0e7871b99d1ec1c32fe02cd72fc1ee020ef7575236d8ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6097b0eb918a4b895b0e7871b99d1ec1c32fe02cd72fc1ee020ef7575236d8ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6097b0eb918a4b895b0e7871b99d1ec1c32fe02cd72fc1ee020ef7575236d8ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6097b0eb918a4b895b0e7871b99d1ec1c32fe02cd72fc1ee020ef7575236d8ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:57 compute-0 podman[156048]: 2026-01-31 06:04:57.233790146 +0000 UTC m=+0.127913482 container init 43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 06:04:57 compute-0 podman[156048]: 2026-01-31 06:04:57.238167587 +0000 UTC m=+0.132290903 container start 43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:04:57 compute-0 podman[156048]: 2026-01-31 06:04:57.241336205 +0000 UTC m=+0.135459521 container attach 43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:04:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]: {
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:     "0": [
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:         {
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "devices": [
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "/dev/loop3"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             ],
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_name": "ceph_lv0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_size": "21470642176",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "name": "ceph_lv0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "tags": {
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cluster_name": "ceph",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.crush_device_class": "",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.encrypted": "0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.objectstore": "bluestore",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osd_id": "0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.type": "block",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.vdo": "0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.with_tpm": "0"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             },
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "type": "block",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "vg_name": "ceph_vg0"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:         }
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:     ],
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:     "1": [
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:         {
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "devices": [
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "/dev/loop4"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             ],
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_name": "ceph_lv1",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_size": "21470642176",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "name": "ceph_lv1",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "tags": {
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cluster_name": "ceph",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.crush_device_class": "",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.encrypted": "0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.objectstore": "bluestore",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osd_id": "1",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.type": "block",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.vdo": "0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.with_tpm": "0"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             },
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "type": "block",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "vg_name": "ceph_vg1"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:         }
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:     ],
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:     "2": [
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:         {
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "devices": [
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "/dev/loop5"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             ],
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_name": "ceph_lv2",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_size": "21470642176",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "name": "ceph_lv2",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "tags": {
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.cluster_name": "ceph",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.crush_device_class": "",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.encrypted": "0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.objectstore": "bluestore",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osd_id": "2",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.type": "block",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.vdo": "0",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:                 "ceph.with_tpm": "0"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             },
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "type": "block",
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:             "vg_name": "ceph_vg2"
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:         }
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]:     ]
Jan 31 06:04:57 compute-0 cranky_torvalds[156064]: }
Jan 31 06:04:57 compute-0 systemd[1]: libpod-43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb.scope: Deactivated successfully.
Jan 31 06:04:57 compute-0 podman[156048]: 2026-01-31 06:04:57.493414973 +0000 UTC m=+0.387538289 container died 43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:04:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6097b0eb918a4b895b0e7871b99d1ec1c32fe02cd72fc1ee020ef7575236d8ca-merged.mount: Deactivated successfully.
Jan 31 06:04:57 compute-0 podman[156048]: 2026-01-31 06:04:57.531450076 +0000 UTC m=+0.425573392 container remove 43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 06:04:57 compute-0 systemd[1]: libpod-conmon-43eacc65dedcb3dc81010445353b0ff738aac8901a62e85789ee3e35d513beeb.scope: Deactivated successfully.
Jan 31 06:04:57 compute-0 sudo[155972]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:57 compute-0 sudo[156085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:04:57 compute-0 sudo[156085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:57 compute-0 sudo[156085]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:57 compute-0 sudo[156110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:04:57 compute-0 sudo[156110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:57 compute-0 podman[156147]: 2026-01-31 06:04:57.962077727 +0000 UTC m=+0.098391885 container create 8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jang, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:04:57 compute-0 podman[156147]: 2026-01-31 06:04:57.882434712 +0000 UTC m=+0.018748870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:04:58 compute-0 systemd[1]: Started libpod-conmon-8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180.scope.
Jan 31 06:04:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:04:58 compute-0 podman[156147]: 2026-01-31 06:04:58.076865315 +0000 UTC m=+0.213179523 container init 8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jang, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 06:04:58 compute-0 podman[156147]: 2026-01-31 06:04:58.081218195 +0000 UTC m=+0.217532353 container start 8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 06:04:58 compute-0 objective_jang[156164]: 167 167
Jan 31 06:04:58 compute-0 systemd[1]: libpod-8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180.scope: Deactivated successfully.
Jan 31 06:04:58 compute-0 podman[156147]: 2026-01-31 06:04:58.188373641 +0000 UTC m=+0.324687799 container attach 8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jang, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:04:58 compute-0 podman[156147]: 2026-01-31 06:04:58.190651184 +0000 UTC m=+0.326965372 container died 8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e934a5bfa62c38c4c83974097a7adf408e5029f0a50cbe388f0ef99bbfd6c79-merged.mount: Deactivated successfully.
Jan 31 06:04:58 compute-0 podman[156147]: 2026-01-31 06:04:58.278745143 +0000 UTC m=+0.415059301 container remove 8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 06:04:58 compute-0 systemd[1]: libpod-conmon-8ced2879ae310b65c814c7dcd02939b3910c8ed050a5990ef943a7ed086e3180.scope: Deactivated successfully.
Jan 31 06:04:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:04:58 compute-0 podman[156190]: 2026-01-31 06:04:58.423881491 +0000 UTC m=+0.043243758 container create be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 06:04:58 compute-0 sshd-session[156183]: Accepted publickey for zuul from 192.168.122.30 port 34644 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:04:58 compute-0 systemd[1]: Started libpod-conmon-be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85.scope.
Jan 31 06:04:58 compute-0 systemd-logind[797]: New session 48 of user zuul.
Jan 31 06:04:58 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 31 06:04:58 compute-0 sshd-session[156183]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:04:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334a787f2f23b89fdbfdf093b581a1bdc474a368e75f47e6c00a221d8d565b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334a787f2f23b89fdbfdf093b581a1bdc474a368e75f47e6c00a221d8d565b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334a787f2f23b89fdbfdf093b581a1bdc474a368e75f47e6c00a221d8d565b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5334a787f2f23b89fdbfdf093b581a1bdc474a368e75f47e6c00a221d8d565b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:04:58 compute-0 podman[156190]: 2026-01-31 06:04:58.493953311 +0000 UTC m=+0.113315598 container init be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_thompson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:04:58 compute-0 podman[156190]: 2026-01-31 06:04:58.401477401 +0000 UTC m=+0.020839678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:04:58 compute-0 podman[156190]: 2026-01-31 06:04:58.501208931 +0000 UTC m=+0.120571188 container start be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:04:58 compute-0 podman[156190]: 2026-01-31 06:04:58.545764625 +0000 UTC m=+0.165126882 container attach be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_thompson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 06:04:58 compute-0 ceph-mon[75251]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:59 compute-0 lvm[156390]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:04:59 compute-0 lvm[156390]: VG ceph_vg0 finished
Jan 31 06:04:59 compute-0 lvm[156407]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:04:59 compute-0 lvm[156407]: VG ceph_vg1 finished
Jan 31 06:04:59 compute-0 lvm[156414]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:04:59 compute-0 lvm[156414]: VG ceph_vg2 finished
Jan 31 06:04:59 compute-0 nostalgic_thompson[156208]: {}
Jan 31 06:04:59 compute-0 systemd[1]: libpod-be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85.scope: Deactivated successfully.
Jan 31 06:04:59 compute-0 podman[156190]: 2026-01-31 06:04:59.196800177 +0000 UTC m=+0.816162434 container died be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 06:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5334a787f2f23b89fdbfdf093b581a1bdc474a368e75f47e6c00a221d8d565b1-merged.mount: Deactivated successfully.
Jan 31 06:04:59 compute-0 podman[156190]: 2026-01-31 06:04:59.237430582 +0000 UTC m=+0.856792839 container remove be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_thompson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 06:04:59 compute-0 systemd[1]: libpod-conmon-be8b095ef949230df21808e726a8a46437b5bf8549ccb3ba405befaac3782b85.scope: Deactivated successfully.
Jan 31 06:04:59 compute-0 sudo[156110]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:04:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:04:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:04:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:04:59 compute-0 sudo[156454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:04:59 compute-0 sudo[156454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:04:59 compute-0 sudo[156454]: pam_unix(sudo:session): session closed for user root
Jan 31 06:04:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:04:59 compute-0 python3.9[156440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:05:00 compute-0 sudo[156632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpvisrkcpudrbahkbckuzflleiqcekxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839499.829869-29-237154134134344/AnsiballZ_command.py'
Jan 31 06:05:00 compute-0 sudo[156632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:00 compute-0 python3.9[156634]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:05:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:05:00 compute-0 ceph-mon[75251]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:00 compute-0 sudo[156632]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:01 compute-0 sudo[156797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnagshvcxrcxejoqogabacimpqwvvqbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839500.7033672-40-129615581659839/AnsiballZ_systemd_service.py'
Jan 31 06:05:01 compute-0 sudo[156797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:01 compute-0 python3.9[156799]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:05:01 compute-0 systemd[1]: Reloading.
Jan 31 06:05:01 compute-0 systemd-rc-local-generator[156822]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:05:01 compute-0 systemd-sysv-generator[156825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:05:01 compute-0 sudo[156797]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:02 compute-0 ceph-mon[75251]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:02 compute-0 python3.9[156984]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:05:02 compute-0 network[157001]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:05:02 compute-0 network[157002]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:05:02 compute-0 network[157003]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:05:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:04 compute-0 ceph-mon[75251]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:05 compute-0 sudo[157263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpydweaupoesyrucnwsyrhbbcbqmiyoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839505.174555-59-87280995870658/AnsiballZ_systemd_service.py'
Jan 31 06:05:05 compute-0 sudo[157263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:05 compute-0 python3.9[157265]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:05:05 compute-0 sudo[157263]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:06 compute-0 sudo[157416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psceratrmosyviubjzlvhxvqsmkxuvos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839505.8061016-59-139788317964161/AnsiballZ_systemd_service.py'
Jan 31 06:05:06 compute-0 sudo[157416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:06 compute-0 python3.9[157418]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:05:06 compute-0 sudo[157416]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:06 compute-0 ceph-mon[75251]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:06 compute-0 sudo[157569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbopacthdnbmmnkgalxoxztcvtggkofx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839506.536301-59-221981368557877/AnsiballZ_systemd_service.py'
Jan 31 06:05:06 compute-0 sudo[157569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:07 compute-0 python3.9[157571]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:05:07 compute-0 sudo[157569]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:07 compute-0 sudo[157722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dikujxypsbjxwfkudjtyilykzbyaosry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839507.1765008-59-281012724538190/AnsiballZ_systemd_service.py'
Jan 31 06:05:07 compute-0 sudo[157722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:07 compute-0 python3.9[157724]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:05:07 compute-0 sudo[157722]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:08 compute-0 sudo[157875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xynasdazafrwsoecfddxgwqoqhefmvbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839507.8840148-59-113735046687330/AnsiballZ_systemd_service.py'
Jan 31 06:05:08 compute-0 sudo[157875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:08 compute-0 python3.9[157877]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:05:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:08 compute-0 sudo[157875]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:08 compute-0 ceph-mon[75251]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:08 compute-0 sudo[158028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqffbscmbbiblylklptacxhcwkplivzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839508.521747-59-244822231626576/AnsiballZ_systemd_service.py'
Jan 31 06:05:08 compute-0 sudo[158028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:09 compute-0 python3.9[158030]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:05:09 compute-0 sudo[158028]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:09 compute-0 sudo[158181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abqsrhvyrwnlmbvctexfjfcalbqjgeuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839509.195382-59-28233012699260/AnsiballZ_systemd_service.py'
Jan 31 06:05:09 compute-0 sudo[158181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:09 compute-0 ceph-mon[75251]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:09 compute-0 python3.9[158183]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:05:09 compute-0 sudo[158181]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:10 compute-0 sudo[158334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyfonxxagztouarlkkxaxjpjlnacjqjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839510.1116035-111-105001684636425/AnsiballZ_file.py'
Jan 31 06:05:10 compute-0 sudo[158334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:10 compute-0 python3.9[158336]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:10 compute-0 sudo[158334]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:11 compute-0 sudo[158486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhoglndonsbvukjmpckedejndppxrli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839510.8914244-111-263437101453589/AnsiballZ_file.py'
Jan 31 06:05:11 compute-0 sudo[158486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:11 compute-0 python3.9[158488]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:11 compute-0 sudo[158486]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:12 compute-0 sudo[158638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghmeibeirzvjgkllbbiwgutzvhoaukei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839512.0674407-111-111547766346086/AnsiballZ_file.py'
Jan 31 06:05:12 compute-0 sudo[158638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:12 compute-0 ceph-mon[75251]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:12 compute-0 python3.9[158640]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:12 compute-0 sudo[158638]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:12 compute-0 sudo[158790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdnjdcxiabqcxkkspkmanenfawxdrerj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839512.662285-111-214757975384467/AnsiballZ_file.py'
Jan 31 06:05:12 compute-0 sudo[158790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:13 compute-0 python3.9[158792]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:13 compute-0 sudo[158790]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:13 compute-0 sudo[158942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aknzqpzymszwytrqgxgbgafvalhdkzwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839513.2332444-111-252291583800106/AnsiballZ_file.py'
Jan 31 06:05:13 compute-0 sudo[158942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:13 compute-0 python3.9[158944]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:13 compute-0 sudo[158942]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:13 compute-0 sudo[159094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbaqhiutkxbxrbcwcweltqrswljsquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839513.72934-111-129051964187031/AnsiballZ_file.py'
Jan 31 06:05:13 compute-0 sudo[159094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:14 compute-0 python3.9[159096]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:14 compute-0 sudo[159094]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:14 compute-0 sudo[159246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnucvwiesodyzxugcneowjmcuafvbcao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839514.2605658-111-165177531310264/AnsiballZ_file.py'
Jan 31 06:05:14 compute-0 sudo[159246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:14 compute-0 ceph-mon[75251]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:14 compute-0 python3.9[159248]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:14 compute-0 sudo[159246]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:15 compute-0 sudo[159409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agfbrxtadaatwbhosbarvkcgcoggwpqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839514.8144312-161-123553658491944/AnsiballZ_file.py'
Jan 31 06:05:15 compute-0 sudo[159409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:15 compute-0 podman[159372]: 2026-01-31 06:05:15.091754857 +0000 UTC m=+0.064540868 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 06:05:15 compute-0 python3.9[159415]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:15 compute-0 sudo[159409]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:05:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:05:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:05:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:05:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:05:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:05:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:15 compute-0 sudo[159576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwslxufqpvzwbdmycnngbcdswzfuszfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839515.373677-161-256544011660892/AnsiballZ_file.py'
Jan 31 06:05:15 compute-0 sudo[159576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:15 compute-0 python3.9[159578]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:15 compute-0 sudo[159576]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:16 compute-0 sudo[159728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqqvusrxvmnsoiaispcrtfrurddwrucs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839515.918857-161-182817316101178/AnsiballZ_file.py'
Jan 31 06:05:16 compute-0 sudo[159728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:16 compute-0 python3.9[159730]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:16 compute-0 sudo[159728]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:16 compute-0 sudo[159880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mumlsznfiwirjwgrwunlrxssmnqwntjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839516.4594865-161-252697489393192/AnsiballZ_file.py'
Jan 31 06:05:16 compute-0 sudo[159880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:16 compute-0 ceph-mon[75251]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:16 compute-0 python3.9[159882]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:16 compute-0 sudo[159880]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:17 compute-0 sudo[160032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxudagjyjjqnggmiijslrewduoncipcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839516.9759066-161-146349706011937/AnsiballZ_file.py'
Jan 31 06:05:17 compute-0 sudo[160032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:17 compute-0 python3.9[160034]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:17 compute-0 sudo[160032]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:17 compute-0 sudo[160184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjnpkgkaaiichmghnhcnhnpklmjblgdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839517.4978971-161-62484506243275/AnsiballZ_file.py'
Jan 31 06:05:17 compute-0 sudo[160184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:17 compute-0 ceph-mon[75251]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:17 compute-0 python3.9[160186]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:17 compute-0 sudo[160184]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:18 compute-0 sudo[160336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptjuypizlfvxkqizmqxnmdhptazahvsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839517.999152-161-69227515019626/AnsiballZ_file.py'
Jan 31 06:05:18 compute-0 sudo[160336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:18 compute-0 python3.9[160338]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:05:18 compute-0 sudo[160336]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:18 compute-0 sudo[160503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahvcuwsvdgjeiknpgekurgefmkuyguuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839518.6574101-212-105054476240881/AnsiballZ_command.py'
Jan 31 06:05:18 compute-0 sudo[160503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:18 compute-0 podman[160462]: 2026-01-31 06:05:18.95676747 +0000 UTC m=+0.070068921 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:05:19 compute-0 python3.9[160509]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:19 compute-0 sudo[160503]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:19 compute-0 python3.9[160661]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 06:05:20 compute-0 sudo[160811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzpwnahsrcxejiwpmgaassqgrsfaofar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839520.220345-230-53885528384226/AnsiballZ_systemd_service.py'
Jan 31 06:05:20 compute-0 sudo[160811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:20 compute-0 ceph-mon[75251]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:20 compute-0 python3.9[160813]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:05:20 compute-0 systemd[1]: Reloading.
Jan 31 06:05:20 compute-0 systemd-sysv-generator[160842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:05:20 compute-0 systemd-rc-local-generator[160832]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:05:21 compute-0 sudo[160811]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:21 compute-0 sudo[160998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewzrkwifvdivbtqjxbitqraxejtfcbcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839521.3556118-238-72126969366489/AnsiballZ_command.py'
Jan 31 06:05:21 compute-0 sudo[160998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:21 compute-0 python3.9[161000]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:21 compute-0 sudo[160998]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:22 compute-0 sudo[161151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtuzykaatjzeswgfufetmslrezkdbdfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839522.0162928-238-133880384979759/AnsiballZ_command.py'
Jan 31 06:05:22 compute-0 sudo[161151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:22 compute-0 python3.9[161153]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:22 compute-0 sudo[161151]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:22 compute-0 ceph-mon[75251]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:22 compute-0 sudo[161304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pybhqmapempzdmrrndrakfhzdvuedqpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839522.672363-238-265144549113020/AnsiballZ_command.py'
Jan 31 06:05:22 compute-0 sudo[161304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:23 compute-0 python3.9[161306]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:23 compute-0 sudo[161304]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:23 compute-0 sudo[161457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukyvsfresjtlsmsoctxjpwuqjdellxqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839523.2922719-238-70623613003884/AnsiballZ_command.py'
Jan 31 06:05:23 compute-0 sudo[161457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:23 compute-0 python3.9[161459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:23 compute-0 sudo[161457]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:23 compute-0 ceph-mon[75251]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:24 compute-0 sudo[161610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zievpunpcthhqjidkkvxntcbicixkrrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839523.8328025-238-5982520960461/AnsiballZ_command.py'
Jan 31 06:05:24 compute-0 sudo[161610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:24 compute-0 python3.9[161612]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:24 compute-0 sudo[161610]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:24 compute-0 sudo[161763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpcxiywtemzclplxxiafinuszjkgwezj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839524.3274019-238-192454051453992/AnsiballZ_command.py'
Jan 31 06:05:24 compute-0 sudo[161763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:24 compute-0 python3.9[161765]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:24 compute-0 sudo[161763]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:25 compute-0 sudo[161916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fskqpjvitbisdpbnapplemukacmzbdos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839524.843765-238-157247082609850/AnsiballZ_command.py'
Jan 31 06:05:25 compute-0 sudo[161916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:25 compute-0 python3.9[161918]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:05:25 compute-0 sudo[161916]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:26 compute-0 sudo[162069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhipyrxjovazywmlabkuxcytdpsvysuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839525.6621115-292-176851990719398/AnsiballZ_getent.py'
Jan 31 06:05:26 compute-0 sudo[162069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:26 compute-0 python3.9[162071]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 06:05:26 compute-0 sudo[162069]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:26 compute-0 ceph-mon[75251]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:26 compute-0 sudo[162222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgtmzskgvpmjjtjvrueniloddthinamk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839526.591924-300-6740741481525/AnsiballZ_group.py'
Jan 31 06:05:26 compute-0 sudo[162222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:27 compute-0 python3.9[162224]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:05:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:27 compute-0 groupadd[162225]: group added to /etc/group: name=libvirt, GID=42473
Jan 31 06:05:27 compute-0 groupadd[162225]: group added to /etc/gshadow: name=libvirt
Jan 31 06:05:27 compute-0 groupadd[162225]: new group: name=libvirt, GID=42473
Jan 31 06:05:27 compute-0 sudo[162222]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:28 compute-0 ceph-mon[75251]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:28 compute-0 sudo[162380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpnojsiyigcskdhrjuszbpgiyikipoeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839527.723059-308-45409479130586/AnsiballZ_user.py'
Jan 31 06:05:28 compute-0 sudo[162380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:28 compute-0 python3.9[162382]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 06:05:28 compute-0 useradd[162384]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 06:05:28 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:05:29 compute-0 sudo[162380]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:29 compute-0 sudo[162541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cknahcmtufmkdidfybivsvswaoeuljzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839529.4497185-319-277934553158111/AnsiballZ_setup.py'
Jan 31 06:05:29 compute-0 sudo[162541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:30 compute-0 python3.9[162543]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:05:30 compute-0 sudo[162541]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:30 compute-0 sudo[162625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktsyzdxpdxlczfbkhxxnrzrllvfaucmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839529.4497185-319-277934553158111/AnsiballZ_dnf.py'
Jan 31 06:05:30 compute-0 sudo[162625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:05:31 compute-0 python3.9[162627]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:05:31 compute-0 ceph-mon[75251]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:32 compute-0 ceph-mon[75251]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:34 compute-0 ceph-mon[75251]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:35 compute-0 ceph-mon[75251]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:38 compute-0 ceph-mon[75251]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:40 compute-0 ceph-mon[75251]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:41 compute-0 ceph-mon[75251]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:05:44
Jan 31 06:05:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:05:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:05:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Jan 31 06:05:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:05:44 compute-0 ceph-mon[75251]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:05:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:05:46 compute-0 podman[162641]: 2026-01-31 06:05:46.194832382 +0000 UTC m=+0.122162964 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 06:05:46 compute-0 ceph-mon[75251]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:48 compute-0 ceph-mon[75251]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:49 compute-0 podman[162670]: 2026-01-31 06:05:49.120341422 +0000 UTC m=+0.050655716 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:05:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:05:50.198 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:05:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:05:50.199 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:05:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:05:50.199 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:05:50 compute-0 ceph-mon[75251]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:52 compute-0 ceph-mon[75251]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:54 compute-0 ceph-mon[75251]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:05:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:05:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:57 compute-0 ceph-mon[75251]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:05:58 compute-0 ceph-mon[75251]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:05:59 compute-0 sudo[162689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:05:59 compute-0 sudo[162689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:05:59 compute-0 sudo[162689]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:59 compute-0 sudo[162714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:05:59 compute-0 sudo[162714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:05:59 compute-0 sudo[162714]: pam_unix(sudo:session): session closed for user root
Jan 31 06:05:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 06:05:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 06:05:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:05:59 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:05:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:05:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:05:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:06:00 compute-0 ceph-mon[75251]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:06:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:06:00 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:06:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:06:00 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:06:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:06:00 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:06:00 compute-0 sudo[162771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:06:00 compute-0 sudo[162771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:06:00 compute-0 sudo[162771]: pam_unix(sudo:session): session closed for user root
Jan 31 06:06:00 compute-0 sudo[162796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:06:00 compute-0 sudo[162796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:06:00 compute-0 podman[162834]: 2026-01-31 06:06:00.636817369 +0000 UTC m=+0.034619367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:06:00 compute-0 podman[162834]: 2026-01-31 06:06:00.774172126 +0000 UTC m=+0.171974084 container create 525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 06:06:00 compute-0 systemd[1]: Started libpod-conmon-525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978.scope.
Jan 31 06:06:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:06:00 compute-0 podman[162834]: 2026-01-31 06:06:00.866882579 +0000 UTC m=+0.264684587 container init 525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:06:00 compute-0 podman[162834]: 2026-01-31 06:06:00.873710513 +0000 UTC m=+0.271512481 container start 525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swanson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 06:06:00 compute-0 trusting_swanson[162851]: 167 167
Jan 31 06:06:00 compute-0 systemd[1]: libpod-525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978.scope: Deactivated successfully.
Jan 31 06:06:00 compute-0 podman[162834]: 2026-01-31 06:06:00.947772468 +0000 UTC m=+0.345574476 container attach 525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 31 06:06:00 compute-0 podman[162834]: 2026-01-31 06:06:00.948402267 +0000 UTC m=+0.346204235 container died 525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 06:06:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 06:06:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:06:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:06:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:06:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:06:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:06:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c46e9e034e7b7e808354b4394ffd9708b66700d1a34b1b7d6f62f3489a08a1-merged.mount: Deactivated successfully.
Jan 31 06:06:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:01 compute-0 podman[162834]: 2026-01-31 06:06:01.545836242 +0000 UTC m=+0.943638240 container remove 525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:06:01 compute-0 systemd[1]: libpod-conmon-525d8c4a31978f8d198cebfd85c1c77c3b8fac7d84c81f2171e2661434d20978.scope: Deactivated successfully.
Jan 31 06:06:01 compute-0 podman[162876]: 2026-01-31 06:06:01.691985933 +0000 UTC m=+0.054337206 container create d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:06:01 compute-0 systemd[1]: Started libpod-conmon-d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09.scope.
Jan 31 06:06:01 compute-0 podman[162876]: 2026-01-31 06:06:01.667926393 +0000 UTC m=+0.030277746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:06:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341de7de1512e6200a61052ce4deb86c16d107adcb23a7e625a6ce6c203c8fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341de7de1512e6200a61052ce4deb86c16d107adcb23a7e625a6ce6c203c8fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341de7de1512e6200a61052ce4deb86c16d107adcb23a7e625a6ce6c203c8fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341de7de1512e6200a61052ce4deb86c16d107adcb23a7e625a6ce6c203c8fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4341de7de1512e6200a61052ce4deb86c16d107adcb23a7e625a6ce6c203c8fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:01 compute-0 podman[162876]: 2026-01-31 06:06:01.784802279 +0000 UTC m=+0.147153582 container init d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_herschel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 06:06:01 compute-0 podman[162876]: 2026-01-31 06:06:01.791031375 +0000 UTC m=+0.153382638 container start d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_herschel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 06:06:01 compute-0 podman[162876]: 2026-01-31 06:06:01.803102446 +0000 UTC m=+0.165453739 container attach d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 06:06:02 compute-0 hardcore_herschel[162892]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:06:02 compute-0 hardcore_herschel[162892]: --> All data devices are unavailable
Jan 31 06:06:02 compute-0 systemd[1]: libpod-d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09.scope: Deactivated successfully.
Jan 31 06:06:02 compute-0 podman[162876]: 2026-01-31 06:06:02.178038439 +0000 UTC m=+0.540389702 container died d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:06:02 compute-0 ceph-mon[75251]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4341de7de1512e6200a61052ce4deb86c16d107adcb23a7e625a6ce6c203c8fa-merged.mount: Deactivated successfully.
Jan 31 06:06:02 compute-0 podman[162876]: 2026-01-31 06:06:02.811886725 +0000 UTC m=+1.174237998 container remove d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:06:02 compute-0 systemd[1]: libpod-conmon-d5d5828c0c3f00b88f882fc70c7a94055f976ef0669728ccda7c55b512442e09.scope: Deactivated successfully.
Jan 31 06:06:02 compute-0 sudo[162796]: pam_unix(sudo:session): session closed for user root
Jan 31 06:06:02 compute-0 sudo[162925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:06:02 compute-0 sudo[162925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:06:02 compute-0 sudo[162925]: pam_unix(sudo:session): session closed for user root
Jan 31 06:06:02 compute-0 sudo[162950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:06:02 compute-0 sudo[162950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:06:03 compute-0 podman[162988]: 2026-01-31 06:06:03.300569299 +0000 UTC m=+0.096976721 container create abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:06:03 compute-0 podman[162988]: 2026-01-31 06:06:03.223438693 +0000 UTC m=+0.019846145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:06:03 compute-0 systemd[1]: Started libpod-conmon-abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00.scope.
Jan 31 06:06:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:06:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 06:06:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:03 compute-0 podman[162988]: 2026-01-31 06:06:03.427188156 +0000 UTC m=+0.223595608 container init abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 06:06:03 compute-0 podman[162988]: 2026-01-31 06:06:03.431989669 +0000 UTC m=+0.228397091 container start abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_napier, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 06:06:03 compute-0 confident_napier[163004]: 167 167
Jan 31 06:06:03 compute-0 systemd[1]: libpod-abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00.scope: Deactivated successfully.
Jan 31 06:06:03 compute-0 podman[162988]: 2026-01-31 06:06:03.4540732 +0000 UTC m=+0.250480622 container attach abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_napier, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 06:06:03 compute-0 podman[162988]: 2026-01-31 06:06:03.4544042 +0000 UTC m=+0.250811612 container died abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_napier, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:06:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-37ec87a46a88a07ffdc01d86992d5ce404312a1da06e353accda47e70c81d2a5-merged.mount: Deactivated successfully.
Jan 31 06:06:03 compute-0 podman[162988]: 2026-01-31 06:06:03.600911981 +0000 UTC m=+0.397319403 container remove abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 06:06:03 compute-0 systemd[1]: libpod-conmon-abca79abdcd6877cf7f4af040d3cad122eead2e03416d4275c03bf61cd016f00.scope: Deactivated successfully.
Jan 31 06:06:03 compute-0 podman[163040]: 2026-01-31 06:06:03.751161845 +0000 UTC m=+0.049648616 container create 94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:06:03 compute-0 systemd[1]: Started libpod-conmon-94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920.scope.
Jan 31 06:06:03 compute-0 podman[163040]: 2026-01-31 06:06:03.724048684 +0000 UTC m=+0.022535475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:06:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d346245a319e219fbd2e3e6c3dceac833a4c2572e8daa6bafc2961c87a04de29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d346245a319e219fbd2e3e6c3dceac833a4c2572e8daa6bafc2961c87a04de29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d346245a319e219fbd2e3e6c3dceac833a4c2572e8daa6bafc2961c87a04de29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d346245a319e219fbd2e3e6c3dceac833a4c2572e8daa6bafc2961c87a04de29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:03 compute-0 podman[163040]: 2026-01-31 06:06:03.858023431 +0000 UTC m=+0.156510302 container init 94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:06:03 compute-0 podman[163040]: 2026-01-31 06:06:03.865883856 +0000 UTC m=+0.164370677 container start 94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:06:03 compute-0 podman[163040]: 2026-01-31 06:06:03.882933895 +0000 UTC m=+0.181420666 container attach 94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 06:06:04 compute-0 lucid_swanson[163064]: {
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:     "0": [
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:         {
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "devices": [
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "/dev/loop3"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             ],
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_name": "ceph_lv0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_size": "21470642176",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "name": "ceph_lv0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "tags": {
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cluster_name": "ceph",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.crush_device_class": "",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.encrypted": "0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.objectstore": "bluestore",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osd_id": "0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.type": "block",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.vdo": "0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.with_tpm": "0"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             },
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "type": "block",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "vg_name": "ceph_vg0"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:         }
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:     ],
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:     "1": [
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:         {
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "devices": [
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "/dev/loop4"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             ],
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_name": "ceph_lv1",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_size": "21470642176",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "name": "ceph_lv1",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "tags": {
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cluster_name": "ceph",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.crush_device_class": "",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.encrypted": "0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.objectstore": "bluestore",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osd_id": "1",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.type": "block",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.vdo": "0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.with_tpm": "0"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             },
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "type": "block",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "vg_name": "ceph_vg1"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:         }
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:     ],
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:     "2": [
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:         {
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "devices": [
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "/dev/loop5"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             ],
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_name": "ceph_lv2",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_size": "21470642176",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "name": "ceph_lv2",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "tags": {
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.cluster_name": "ceph",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.crush_device_class": "",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.encrypted": "0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.objectstore": "bluestore",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osd_id": "2",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.type": "block",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.vdo": "0",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:                 "ceph.with_tpm": "0"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             },
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "type": "block",
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:             "vg_name": "ceph_vg2"
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:         }
Jan 31 06:06:04 compute-0 lucid_swanson[163064]:     ]
Jan 31 06:06:04 compute-0 lucid_swanson[163064]: }
Jan 31 06:06:04 compute-0 systemd[1]: libpod-94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920.scope: Deactivated successfully.
Jan 31 06:06:04 compute-0 podman[163040]: 2026-01-31 06:06:04.148785746 +0000 UTC m=+0.447272527 container died 94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d346245a319e219fbd2e3e6c3dceac833a4c2572e8daa6bafc2961c87a04de29-merged.mount: Deactivated successfully.
Jan 31 06:06:04 compute-0 podman[163040]: 2026-01-31 06:06:04.226312474 +0000 UTC m=+0.524799245 container remove 94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_swanson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 06:06:04 compute-0 systemd[1]: libpod-conmon-94c96363f668c1b5f462c748478b70b96dc6a684e4716fbe5845bca80e2cb920.scope: Deactivated successfully.
Jan 31 06:06:04 compute-0 sudo[162950]: pam_unix(sudo:session): session closed for user root
Jan 31 06:06:04 compute-0 sudo[163106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:06:04 compute-0 sudo[163106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:06:04 compute-0 sudo[163106]: pam_unix(sudo:session): session closed for user root
Jan 31 06:06:04 compute-0 sudo[163133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:06:04 compute-0 sudo[163133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:06:04 compute-0 ceph-mon[75251]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 06:06:04 compute-0 podman[163183]: 2026-01-31 06:06:04.618913516 +0000 UTC m=+0.068100678 container create 4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:06:04 compute-0 podman[163183]: 2026-01-31 06:06:04.578771585 +0000 UTC m=+0.027958757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:06:05 compute-0 systemd[1]: Started libpod-conmon-4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba.scope.
Jan 31 06:06:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:06:05 compute-0 podman[163183]: 2026-01-31 06:06:05.343406531 +0000 UTC m=+0.792593673 container init 4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:06:05 compute-0 podman[163183]: 2026-01-31 06:06:05.348240036 +0000 UTC m=+0.797427158 container start 4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:06:05 compute-0 gracious_cerf[163207]: 167 167
Jan 31 06:06:05 compute-0 systemd[1]: libpod-4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba.scope: Deactivated successfully.
Jan 31 06:06:05 compute-0 podman[163183]: 2026-01-31 06:06:05.358841263 +0000 UTC m=+0.808028405 container attach 4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:06:05 compute-0 podman[163183]: 2026-01-31 06:06:05.359478112 +0000 UTC m=+0.808665254 container died 4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 06:06:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 06:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c278e5d95196de8e2b769ac119f7bcac01c243d047876a4dbb03513f47a9b457-merged.mount: Deactivated successfully.
Jan 31 06:06:05 compute-0 podman[163183]: 2026-01-31 06:06:05.464388499 +0000 UTC m=+0.913575641 container remove 4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:06:05 compute-0 systemd[1]: libpod-conmon-4e467c3783c30a479947b1afea822cb9102800c88f1cc8cbc75b0acb254a36ba.scope: Deactivated successfully.
Jan 31 06:06:05 compute-0 podman[163244]: 2026-01-31 06:06:05.578651326 +0000 UTC m=+0.019983218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:06:05 compute-0 podman[163244]: 2026-01-31 06:06:05.765399441 +0000 UTC m=+0.206731253 container create 230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:06:05 compute-0 systemd[1]: Started libpod-conmon-230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4.scope.
Jan 31 06:06:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528af09d19d9f9e916ff72588d1cfaa1c04e444ec6dc03be722bbd00583717af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528af09d19d9f9e916ff72588d1cfaa1c04e444ec6dc03be722bbd00583717af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528af09d19d9f9e916ff72588d1cfaa1c04e444ec6dc03be722bbd00583717af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528af09d19d9f9e916ff72588d1cfaa1c04e444ec6dc03be722bbd00583717af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:06:06 compute-0 podman[163244]: 2026-01-31 06:06:06.090071091 +0000 UTC m=+0.531402953 container init 230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 06:06:06 compute-0 podman[163244]: 2026-01-31 06:06:06.096692099 +0000 UTC m=+0.538023911 container start 230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_chebyshev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 06:06:06 compute-0 podman[163244]: 2026-01-31 06:06:06.254423056 +0000 UTC m=+0.695754868 container attach 230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_chebyshev, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:06:06 compute-0 lvm[163390]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:06:06 compute-0 lvm[163390]: VG ceph_vg0 finished
Jan 31 06:06:06 compute-0 lvm[163392]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:06:06 compute-0 lvm[163392]: VG ceph_vg1 finished
Jan 31 06:06:06 compute-0 lvm[163395]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:06:06 compute-0 lvm[163395]: VG ceph_vg2 finished
Jan 31 06:06:06 compute-0 ceph-mon[75251]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 06:06:07 compute-0 nifty_chebyshev[163282]: {}
Jan 31 06:06:07 compute-0 systemd[1]: libpod-230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4.scope: Deactivated successfully.
Jan 31 06:06:07 compute-0 podman[163244]: 2026-01-31 06:06:07.076929764 +0000 UTC m=+1.518261586 container died 230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_chebyshev, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:06:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 31 06:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-528af09d19d9f9e916ff72588d1cfaa1c04e444ec6dc03be722bbd00583717af-merged.mount: Deactivated successfully.
Jan 31 06:06:07 compute-0 podman[163244]: 2026-01-31 06:06:07.696165562 +0000 UTC m=+2.137497384 container remove 230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_chebyshev, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:06:07 compute-0 sudo[163133]: pam_unix(sudo:session): session closed for user root
Jan 31 06:06:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:06:07 compute-0 systemd[1]: libpod-conmon-230aa17a8b6feeeb83eee0261e8308413713103776f29aee1c4ecb788a3a26e4.scope: Deactivated successfully.
Jan 31 06:06:07 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:06:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:06:07 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:06:08 compute-0 sudo[163458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:06:08 compute-0 sudo[163458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:06:08 compute-0 sudo[163458]: pam_unix(sudo:session): session closed for user root
Jan 31 06:06:08 compute-0 ceph-mon[75251]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 31 06:06:08 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:06:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:06:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 06:06:10 compute-0 ceph-mon[75251]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 06:06:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 06:06:11 compute-0 ceph-mon[75251]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 06:06:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 06:06:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:14 compute-0 ceph-mon[75251]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 06:06:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:06:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:06:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:06:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:06:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:06:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:06:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 06:06:16 compute-0 ceph-mon[75251]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 31 06:06:17 compute-0 podman[163484]: 2026-01-31 06:06:17.272892469 +0000 UTC m=+0.199643222 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 06:06:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Jan 31 06:06:18 compute-0 ceph-mon[75251]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Jan 31 06:06:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 06:06:20 compute-0 podman[163511]: 2026-01-31 06:06:20.140270129 +0000 UTC m=+0.055480951 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 06:06:20 compute-0 ceph-mon[75251]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 06:06:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 31 06:06:22 compute-0 ceph-mon[75251]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 31 06:06:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 31 06:06:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:25 compute-0 ceph-mon[75251]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 31 06:06:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:27 compute-0 ceph-mon[75251]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:31 compute-0 ceph-mon[75251]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:32 compute-0 ceph-mon[75251]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:32 compute-0 ceph-mon[75251]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:34 compute-0 ceph-mon[75251]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:37 compute-0 ceph-mon[75251]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:39 compute-0 ceph-mon[75251]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:39 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 06:06:39 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:06:39 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 06:06:39 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:06:39 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:06:39 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:06:39 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:06:39 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:06:40 compute-0 ceph-mon[75251]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:41 compute-0 ceph-mon[75251]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:06:44
Jan 31 06:06:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:06:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:06:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control']
Jan 31 06:06:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:06:44 compute-0 ceph-mon[75251]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:06:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:46 compute-0 ceph-mon[75251]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:48 compute-0 ceph-mon[75251]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:48 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 06:06:48 compute-0 podman[163545]: 2026-01-31 06:06:48.1898891 +0000 UTC m=+0.096094714 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:06:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:06:50.199 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:06:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:06:50.200 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:06:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:06:50.200 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:06:50 compute-0 ceph-mon[75251]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:51 compute-0 podman[163577]: 2026-01-31 06:06:51.116990434 +0000 UTC m=+0.039036107 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 06:06:51 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 06:06:51 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:06:51 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 06:06:51 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:06:51 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:06:51 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:06:51 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:06:51 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:06:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:52 compute-0 ceph-mon[75251]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:54 compute-0 ceph-mon[75251]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:06:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:06:55 compute-0 ceph-mon[75251]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:58 compute-0 ceph-mon[75251]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:06:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:06:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:00 compute-0 ceph-mon[75251]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:03 compute-0 ceph-mon[75251]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:04 compute-0 ceph-mon[75251]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:06 compute-0 ceph-mon[75251]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:08 compute-0 sudo[168380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:07:08 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 06:07:08 compute-0 sudo[168380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:08 compute-0 sudo[168380]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:08 compute-0 sudo[168457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 06:07:08 compute-0 sudo[168457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:08 compute-0 podman[168967]: 2026-01-31 06:07:08.563062308 +0000 UTC m=+0.056674713 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:07:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:08 compute-0 podman[168967]: 2026-01-31 06:07:08.66750828 +0000 UTC m=+0.161120655 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 06:07:08 compute-0 ceph-mon[75251]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:09 compute-0 sudo[168457]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:07:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:07:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:09 compute-0 sudo[169973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:07:09 compute-0 sudo[169973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:09 compute-0 sudo[169973]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:09 compute-0 sudo[170041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:07:09 compute-0 sudo[170041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:09 compute-0 sudo[170041]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:07:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:07:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:07:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:07:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:07:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:07:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:07:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:07:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:07:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:07:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:07:09 compute-0 sudo[170629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:07:09 compute-0 sudo[170629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:09 compute-0 sudo[170629]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:09 compute-0 sudo[170695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:07:09 compute-0 sudo[170695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:10 compute-0 podman[171057]: 2026-01-31 06:07:10.103872356 +0000 UTC m=+0.038086970 container create cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:07:10 compute-0 systemd[1]: Started libpod-conmon-cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e.scope.
Jan 31 06:07:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:07:10 compute-0 podman[171057]: 2026-01-31 06:07:10.084379739 +0000 UTC m=+0.018594353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:07:10 compute-0 podman[171057]: 2026-01-31 06:07:10.192177535 +0000 UTC m=+0.126392159 container init cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:07:10 compute-0 podman[171057]: 2026-01-31 06:07:10.197591307 +0000 UTC m=+0.131805911 container start cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:07:10 compute-0 podman[171057]: 2026-01-31 06:07:10.20195903 +0000 UTC m=+0.136173654 container attach cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 06:07:10 compute-0 wonderful_wiles[171172]: 167 167
Jan 31 06:07:10 compute-0 systemd[1]: libpod-cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e.scope: Deactivated successfully.
Jan 31 06:07:10 compute-0 podman[171057]: 2026-01-31 06:07:10.20268265 +0000 UTC m=+0.136897254 container died cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-761dc03d11f3ea3b6d79dc8a4ce887712c1e1abd3e376ea9a65fad09530ed126-merged.mount: Deactivated successfully.
Jan 31 06:07:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:10 compute-0 ceph-mon[75251]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:07:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:07:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:07:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:07:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:07:10 compute-0 podman[171057]: 2026-01-31 06:07:10.250329268 +0000 UTC m=+0.184543882 container remove cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wiles, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 06:07:10 compute-0 systemd[1]: libpod-conmon-cde9c4bf6f34ed037e430a5db2cec312b4cd7f30de09cc78a55abfae6077ed2e.scope: Deactivated successfully.
Jan 31 06:07:10 compute-0 podman[171375]: 2026-01-31 06:07:10.355058568 +0000 UTC m=+0.032931616 container create a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sutherland, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 06:07:10 compute-0 systemd[1]: Started libpod-conmon-a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5.scope.
Jan 31 06:07:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aa5bee16df112898268d2d6c9483a357fc7e5f55466e6951994e8c9816a588/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aa5bee16df112898268d2d6c9483a357fc7e5f55466e6951994e8c9816a588/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aa5bee16df112898268d2d6c9483a357fc7e5f55466e6951994e8c9816a588/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aa5bee16df112898268d2d6c9483a357fc7e5f55466e6951994e8c9816a588/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78aa5bee16df112898268d2d6c9483a357fc7e5f55466e6951994e8c9816a588/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:10 compute-0 podman[171375]: 2026-01-31 06:07:10.339053749 +0000 UTC m=+0.016926787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:07:10 compute-0 podman[171375]: 2026-01-31 06:07:10.437890354 +0000 UTC m=+0.115763402 container init a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sutherland, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 06:07:10 compute-0 podman[171375]: 2026-01-31 06:07:10.444463968 +0000 UTC m=+0.122337016 container start a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 06:07:10 compute-0 podman[171375]: 2026-01-31 06:07:10.451076714 +0000 UTC m=+0.128949762 container attach a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sutherland, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 06:07:10 compute-0 modest_sutherland[171484]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:07:10 compute-0 modest_sutherland[171484]: --> All data devices are unavailable
Jan 31 06:07:10 compute-0 systemd[1]: libpod-a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5.scope: Deactivated successfully.
Jan 31 06:07:10 compute-0 podman[171375]: 2026-01-31 06:07:10.854458049 +0000 UTC m=+0.532331117 container died a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sutherland, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 06:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-78aa5bee16df112898268d2d6c9483a357fc7e5f55466e6951994e8c9816a588-merged.mount: Deactivated successfully.
Jan 31 06:07:10 compute-0 podman[171375]: 2026-01-31 06:07:10.91076658 +0000 UTC m=+0.588639628 container remove a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sutherland, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:07:10 compute-0 systemd[1]: libpod-conmon-a4c88ab1811ab9ffb8eb079d6156e94160b5293905041114b4998385c17563e5.scope: Deactivated successfully.
Jan 31 06:07:10 compute-0 sudo[170695]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:10 compute-0 sudo[172071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:07:10 compute-0 sudo[172071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:10 compute-0 sudo[172071]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:11 compute-0 sudo[172144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:07:11 compute-0 sudo[172144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:11 compute-0 podman[172439]: 2026-01-31 06:07:11.25877552 +0000 UTC m=+0.018667155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:07:11 compute-0 podman[172439]: 2026-01-31 06:07:11.3919643 +0000 UTC m=+0.151855895 container create d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:07:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:11 compute-0 systemd[1]: Started libpod-conmon-d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee.scope.
Jan 31 06:07:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:07:11 compute-0 podman[172439]: 2026-01-31 06:07:11.639011146 +0000 UTC m=+0.398902791 container init d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 06:07:11 compute-0 podman[172439]: 2026-01-31 06:07:11.64594009 +0000 UTC m=+0.405831675 container start d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:07:11 compute-0 pensive_visvesvaraya[172808]: 167 167
Jan 31 06:07:11 compute-0 systemd[1]: libpod-d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee.scope: Deactivated successfully.
Jan 31 06:07:11 compute-0 podman[172439]: 2026-01-31 06:07:11.790765546 +0000 UTC m=+0.550657131 container attach d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:07:11 compute-0 podman[172439]: 2026-01-31 06:07:11.791371373 +0000 UTC m=+0.551262958 container died d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_visvesvaraya, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 31 06:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b429bb9e8ff64b05f9561db9218e3adc7a5c8c0453bcc390d46cb23a1269285b-merged.mount: Deactivated successfully.
Jan 31 06:07:11 compute-0 podman[172439]: 2026-01-31 06:07:11.873027886 +0000 UTC m=+0.632919471 container remove d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_visvesvaraya, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:07:11 compute-0 systemd[1]: libpod-conmon-d958ef360ba1cc1781b5bb31ccd2306a0d6335d94a8dcca0ba8f55c4c211ceee.scope: Deactivated successfully.
Jan 31 06:07:12 compute-0 podman[173239]: 2026-01-31 06:07:12.006902204 +0000 UTC m=+0.043039919 container create 1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_roentgen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:07:12 compute-0 systemd[1]: Started libpod-conmon-1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6.scope.
Jan 31 06:07:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1421cbe30f53b54fea9f5dda9ff641e1baef3f4707d542973432e01622d0c0ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1421cbe30f53b54fea9f5dda9ff641e1baef3f4707d542973432e01622d0c0ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1421cbe30f53b54fea9f5dda9ff641e1baef3f4707d542973432e01622d0c0ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1421cbe30f53b54fea9f5dda9ff641e1baef3f4707d542973432e01622d0c0ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:12 compute-0 podman[173239]: 2026-01-31 06:07:12.082637611 +0000 UTC m=+0.118775326 container init 1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_roentgen, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:07:12 compute-0 podman[173239]: 2026-01-31 06:07:11.988010524 +0000 UTC m=+0.024148249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:07:12 compute-0 podman[173239]: 2026-01-31 06:07:12.088956878 +0000 UTC m=+0.125094583 container start 1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:07:12 compute-0 podman[173239]: 2026-01-31 06:07:12.104832644 +0000 UTC m=+0.140970349 container attach 1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]: {
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:     "0": [
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:         {
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "devices": [
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "/dev/loop3"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             ],
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_name": "ceph_lv0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_size": "21470642176",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "name": "ceph_lv0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "tags": {
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cluster_name": "ceph",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.crush_device_class": "",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.encrypted": "0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.objectstore": "bluestore",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osd_id": "0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.type": "block",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.vdo": "0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.with_tpm": "0"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             },
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "type": "block",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "vg_name": "ceph_vg0"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:         }
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:     ],
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:     "1": [
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:         {
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "devices": [
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "/dev/loop4"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             ],
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_name": "ceph_lv1",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_size": "21470642176",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "name": "ceph_lv1",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "tags": {
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cluster_name": "ceph",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.crush_device_class": "",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.encrypted": "0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.objectstore": "bluestore",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osd_id": "1",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.type": "block",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.vdo": "0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.with_tpm": "0"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             },
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "type": "block",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "vg_name": "ceph_vg1"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:         }
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:     ],
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:     "2": [
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:         {
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "devices": [
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "/dev/loop5"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             ],
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_name": "ceph_lv2",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_size": "21470642176",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "name": "ceph_lv2",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "tags": {
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.cluster_name": "ceph",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.crush_device_class": "",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.encrypted": "0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.objectstore": "bluestore",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osd_id": "2",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.type": "block",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.vdo": "0",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:                 "ceph.with_tpm": "0"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             },
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "type": "block",
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:             "vg_name": "ceph_vg2"
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:         }
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]:     ]
Jan 31 06:07:12 compute-0 intelligent_roentgen[173326]: }
Jan 31 06:07:12 compute-0 systemd[1]: libpod-1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6.scope: Deactivated successfully.
Jan 31 06:07:12 compute-0 podman[173239]: 2026-01-31 06:07:12.357662712 +0000 UTC m=+0.393800417 container died 1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 06:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1421cbe30f53b54fea9f5dda9ff641e1baef3f4707d542973432e01622d0c0ee-merged.mount: Deactivated successfully.
Jan 31 06:07:12 compute-0 podman[173239]: 2026-01-31 06:07:12.438472261 +0000 UTC m=+0.474609956 container remove 1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 06:07:12 compute-0 systemd[1]: libpod-conmon-1725b9751adc4fdfe16a929d8ce2f72da84b620e4dd3f74c5809c2f4ab00b9f6.scope: Deactivated successfully.
Jan 31 06:07:12 compute-0 sudo[172144]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:12 compute-0 sudo[173812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:07:12 compute-0 sudo[173812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:12 compute-0 sudo[173812]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:12 compute-0 ceph-mon[75251]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:12 compute-0 sudo[173881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:07:12 compute-0 sudo[173881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:12 compute-0 podman[174171]: 2026-01-31 06:07:12.826638328 +0000 UTC m=+0.036349182 container create 5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:07:12 compute-0 systemd[1]: Started libpod-conmon-5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca.scope.
Jan 31 06:07:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:07:12 compute-0 podman[174171]: 2026-01-31 06:07:12.892512127 +0000 UTC m=+0.102223021 container init 5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_almeida, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 06:07:12 compute-0 podman[174171]: 2026-01-31 06:07:12.897781845 +0000 UTC m=+0.107492719 container start 5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_almeida, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 06:07:12 compute-0 hardcore_almeida[174262]: 167 167
Jan 31 06:07:12 compute-0 podman[174171]: 2026-01-31 06:07:12.901038517 +0000 UTC m=+0.110749411 container attach 5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 06:07:12 compute-0 systemd[1]: libpod-5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca.scope: Deactivated successfully.
Jan 31 06:07:12 compute-0 podman[174171]: 2026-01-31 06:07:12.901405087 +0000 UTC m=+0.111115951 container died 5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_almeida, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:07:12 compute-0 podman[174171]: 2026-01-31 06:07:12.808074347 +0000 UTC m=+0.017785261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-395f8c691af7c6c454d17222b2b731d6c76f12ab87b61fbe4c1a8984095ef1c7-merged.mount: Deactivated successfully.
Jan 31 06:07:12 compute-0 podman[174171]: 2026-01-31 06:07:12.93250396 +0000 UTC m=+0.142214814 container remove 5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:07:12 compute-0 systemd[1]: libpod-conmon-5593f12de7de0137328225108084a274012209075d0c2bedf6bee3e86ea828ca.scope: Deactivated successfully.
Jan 31 06:07:13 compute-0 podman[174444]: 2026-01-31 06:07:13.055194525 +0000 UTC m=+0.035005024 container create 0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 06:07:13 compute-0 systemd[1]: Started libpod-conmon-0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e.scope.
Jan 31 06:07:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424983580ed7ea0b155d0d9792e7a5bbf012a69532275f10cd14cb0ac9a757cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424983580ed7ea0b155d0d9792e7a5bbf012a69532275f10cd14cb0ac9a757cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424983580ed7ea0b155d0d9792e7a5bbf012a69532275f10cd14cb0ac9a757cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424983580ed7ea0b155d0d9792e7a5bbf012a69532275f10cd14cb0ac9a757cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:07:13 compute-0 podman[174444]: 2026-01-31 06:07:13.129398458 +0000 UTC m=+0.109208987 container init 0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:07:13 compute-0 podman[174444]: 2026-01-31 06:07:13.038237709 +0000 UTC m=+0.018048248 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:07:13 compute-0 podman[174444]: 2026-01-31 06:07:13.13589044 +0000 UTC m=+0.115700949 container start 0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_swanson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:07:13 compute-0 podman[174444]: 2026-01-31 06:07:13.144678867 +0000 UTC m=+0.124489376 container attach 0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_swanson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 06:07:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:13 compute-0 lvm[175262]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:07:13 compute-0 lvm[175262]: VG ceph_vg0 finished
Jan 31 06:07:13 compute-0 lvm[175272]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:07:13 compute-0 lvm[175272]: VG ceph_vg1 finished
Jan 31 06:07:13 compute-0 lvm[175301]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:07:13 compute-0 lvm[175301]: VG ceph_vg2 finished
Jan 31 06:07:13 compute-0 elastic_swanson[174545]: {}
Jan 31 06:07:13 compute-0 podman[174444]: 2026-01-31 06:07:13.901236548 +0000 UTC m=+0.881047077 container died 0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_swanson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 06:07:13 compute-0 systemd[1]: libpod-0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e.scope: Deactivated successfully.
Jan 31 06:07:13 compute-0 systemd[1]: libpod-0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e.scope: Consumed 1.016s CPU time.
Jan 31 06:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-424983580ed7ea0b155d0d9792e7a5bbf012a69532275f10cd14cb0ac9a757cc-merged.mount: Deactivated successfully.
Jan 31 06:07:14 compute-0 podman[174444]: 2026-01-31 06:07:14.220850842 +0000 UTC m=+1.200661351 container remove 0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_swanson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 06:07:14 compute-0 systemd[1]: libpod-conmon-0d9922540a5251dfa6ec33bc382e46550cc5d8de5a76b06eaeedc0b3233ded5e.scope: Deactivated successfully.
Jan 31 06:07:14 compute-0 sudo[173881]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:07:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:07:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:14 compute-0 sudo[175875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:07:14 compute-0 sudo[175875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:07:14 compute-0 sudo[175875]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:14 compute-0 ceph-mon[75251]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:07:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:07:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:07:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:07:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:07:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:07:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:07:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:16 compute-0 ceph-mon[75251]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:18 compute-0 ceph-mon[75251]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:19 compute-0 podman[181223]: 2026-01-31 06:07:19.168792457 +0000 UTC m=+0.094520315 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:07:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:19 compute-0 ceph-mon[75251]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:21 compute-0 podman[181378]: 2026-01-31 06:07:21.910209273 +0000 UTC m=+0.093827126 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:07:22 compute-0 ceph-mon[75251]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:24 compute-0 ceph-mon[75251]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:27 compute-0 ceph-mon[75251]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:28 compute-0 ceph-mon[75251]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:30 compute-0 ceph-mon[75251]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:31 compute-0 ceph-mon[75251]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:32 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Jan 31 06:07:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:07:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 06:07:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:07:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:07:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:07:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:07:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:07:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:34 compute-0 ceph-mon[75251]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:36 compute-0 ceph-mon[75251]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:36 compute-0 groupadd[181438]: group added to /etc/group: name=dnsmasq, GID=992
Jan 31 06:07:36 compute-0 groupadd[181438]: group added to /etc/gshadow: name=dnsmasq
Jan 31 06:07:36 compute-0 groupadd[181438]: new group: name=dnsmasq, GID=992
Jan 31 06:07:36 compute-0 useradd[181445]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 31 06:07:36 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Jan 31 06:07:36 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 06:07:36 compute-0 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Jan 31 06:07:37 compute-0 groupadd[181458]: group added to /etc/group: name=clevis, GID=991
Jan 31 06:07:37 compute-0 groupadd[181458]: group added to /etc/gshadow: name=clevis
Jan 31 06:07:37 compute-0 groupadd[181458]: new group: name=clevis, GID=991
Jan 31 06:07:37 compute-0 useradd[181465]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 31 06:07:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:37 compute-0 usermod[181475]: add 'clevis' to group 'tss'
Jan 31 06:07:37 compute-0 usermod[181475]: add 'clevis' to shadow group 'tss'
Jan 31 06:07:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:38 compute-0 ceph-mon[75251]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:40 compute-0 polkitd[43528]: Reloading rules
Jan 31 06:07:40 compute-0 polkitd[43528]: Collecting garbage unconditionally...
Jan 31 06:07:40 compute-0 polkitd[43528]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 06:07:40 compute-0 polkitd[43528]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 06:07:40 compute-0 polkitd[43528]: Finished loading, compiling and executing 3 rules
Jan 31 06:07:40 compute-0 polkitd[43528]: Reloading rules
Jan 31 06:07:40 compute-0 polkitd[43528]: Collecting garbage unconditionally...
Jan 31 06:07:40 compute-0 polkitd[43528]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 06:07:40 compute-0 polkitd[43528]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 06:07:40 compute-0 polkitd[43528]: Finished loading, compiling and executing 3 rules
Jan 31 06:07:41 compute-0 ceph-mon[75251]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:42 compute-0 ceph-mon[75251]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:44 compute-0 ceph-mon[75251]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:07:44
Jan 31 06:07:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:07:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:07:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 31 06:07:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:07:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:46 compute-0 ceph-mon[75251]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:48 compute-0 ceph-mon[75251]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:50 compute-0 ceph-mon[75251]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:07:50.201 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:07:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:07:50.201 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:07:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:07:50.201 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:07:50 compute-0 podman[182257]: 2026-01-31 06:07:50.216795265 +0000 UTC m=+0.134881997 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:07:50 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 06:07:50 compute-0 sshd[1005]: Received signal 15; terminating.
Jan 31 06:07:50 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 06:07:50 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 06:07:50 compute-0 systemd[1]: sshd.service: Consumed 2.223s CPU time, read 32.0K from disk, written 8.0K to disk.
Jan 31 06:07:50 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 06:07:50 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 31 06:07:50 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:07:50 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:07:50 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:07:50 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 31 06:07:50 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 31 06:07:50 compute-0 sshd[182308]: Server listening on 0.0.0.0 port 22.
Jan 31 06:07:50 compute-0 sshd[182308]: Server listening on :: port 22.
Jan 31 06:07:50 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 31 06:07:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:52 compute-0 podman[182502]: 2026-01-31 06:07:52.141738478 +0000 UTC m=+0.065391177 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 06:07:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:07:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:07:52 compute-0 systemd[1]: Reloading.
Jan 31 06:07:52 compute-0 systemd-rc-local-generator[182588]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:07:52 compute-0 systemd-sysv-generator[182591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:07:52 compute-0 ceph-mon[75251]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:53 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:07:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:07:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:07:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:57 compute-0 ceph-mon[75251]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.678677) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839677678755, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2043, "num_deletes": 251, "total_data_size": 3582678, "memory_usage": 3645488, "flush_reason": "Manual Compaction"}
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839677697761, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3506375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9852, "largest_seqno": 11894, "table_properties": {"data_size": 3497068, "index_size": 5929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17812, "raw_average_key_size": 19, "raw_value_size": 3478659, "raw_average_value_size": 3797, "num_data_blocks": 269, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769839438, "oldest_key_time": 1769839438, "file_creation_time": 1769839677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19121 microseconds, and 7306 cpu microseconds.
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.697810) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3506375 bytes OK
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.697829) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.699508) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.699543) EVENT_LOG_v1 {"time_micros": 1769839677699536, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.699564) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3574150, prev total WAL file size 3574150, number of live WAL files 2.
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.700432) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3424KB)], [26(6404KB)]
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839677700493, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10064441, "oldest_snapshot_seqno": -1}
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3774 keys, 8298190 bytes, temperature: kUnknown
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839677745508, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8298190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8268857, "index_size": 18791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 90631, "raw_average_key_size": 24, "raw_value_size": 8196688, "raw_average_value_size": 2171, "num_data_blocks": 812, "num_entries": 3774, "num_filter_entries": 3774, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769839677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.745760) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8298190 bytes
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.747739) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.2 rd, 184.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 4288, records dropped: 514 output_compression: NoCompression
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.747768) EVENT_LOG_v1 {"time_micros": 1769839677747753, "job": 10, "event": "compaction_finished", "compaction_time_micros": 45087, "compaction_time_cpu_micros": 16263, "output_level": 6, "num_output_files": 1, "total_output_size": 8298190, "num_input_records": 4288, "num_output_records": 3774, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839677748303, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839677749368, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.700301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.749416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.749421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.749423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.749424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:07:57 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:07:57.749426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:07:58 compute-0 sudo[162625]: pam_unix(sudo:session): session closed for user root
Jan 31 06:07:58 compute-0 ceph-mon[75251]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:58 compute-0 ceph-mon[75251]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:07:59 compute-0 sudo[184831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyyicloxgeolfduwcjrppponluchytfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839678.577937-331-106085318816760/AnsiballZ_systemd.py'
Jan 31 06:07:59 compute-0 sudo[184831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:07:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:07:59 compute-0 python3.9[184854]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:07:59 compute-0 systemd[1]: Reloading.
Jan 31 06:07:59 compute-0 systemd-sysv-generator[185585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:07:59 compute-0 systemd-rc-local-generator[185582]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:07:59 compute-0 sudo[184831]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:00 compute-0 sudo[186634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ochssrzrawszphuvrwlxydvsmybheiiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839679.9379067-331-258204163862794/AnsiballZ_systemd.py'
Jan 31 06:08:00 compute-0 sudo[186634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:00 compute-0 python3.9[186656]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:08:00 compute-0 systemd[1]: Reloading.
Jan 31 06:08:00 compute-0 systemd-sysv-generator[187400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:08:00 compute-0 systemd-rc-local-generator[187397]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:08:00 compute-0 ceph-mon[75251]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:00 compute-0 sudo[186634]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:01 compute-0 sudo[188528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzssdoqgswijczsdylcncboehgipvpzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839680.914464-331-172968046894912/AnsiballZ_systemd.py'
Jan 31 06:08:01 compute-0 sudo[188528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:01 compute-0 python3.9[188582]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:08:01 compute-0 systemd[1]: Reloading.
Jan 31 06:08:01 compute-0 systemd-rc-local-generator[189204]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:08:01 compute-0 systemd-sysv-generator[189207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:08:01 compute-0 sudo[188528]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:02 compute-0 sudo[190129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wspimffinsnoljjctqisetxjimegzeak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839681.8842533-331-268466352561146/AnsiballZ_systemd.py'
Jan 31 06:08:02 compute-0 sudo[190129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:02 compute-0 python3.9[190152]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:08:02 compute-0 systemd[1]: Reloading.
Jan 31 06:08:02 compute-0 systemd-rc-local-generator[190781]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:08:02 compute-0 systemd-sysv-generator[190786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:08:02 compute-0 sudo[190129]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:02 compute-0 ceph-mon[75251]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:03 compute-0 sudo[191773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqinqidbzhcddhrhebxeeloiviinoatd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839682.8367307-360-102102261789679/AnsiballZ_systemd.py'
Jan 31 06:08:03 compute-0 sudo[191773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:03 compute-0 python3.9[191775]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:03 compute-0 systemd[1]: Reloading.
Jan 31 06:08:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:03 compute-0 systemd-rc-local-generator[191917]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:08:03 compute-0 systemd-sysv-generator[191922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:08:03 compute-0 sudo[191773]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:03 compute-0 ceph-mon[75251]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:04 compute-0 sudo[192079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frtfemknoapptmqpbzzrarggkdhbsjwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839683.8120153-360-79140401463602/AnsiballZ_systemd.py'
Jan 31 06:08:04 compute-0 sudo[192079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:04 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:08:04 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:08:04 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.909s CPU time.
Jan 31 06:08:04 compute-0 systemd[1]: run-rf24ba872f3504d829e961854e9c378af.service: Deactivated successfully.
Jan 31 06:08:04 compute-0 python3.9[192081]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:04 compute-0 systemd[1]: Reloading.
Jan 31 06:08:04 compute-0 systemd-sysv-generator[192113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:08:04 compute-0 systemd-rc-local-generator[192110]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:08:04 compute-0 sudo[192079]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:05 compute-0 sudo[192270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzxqlmoyzfuhovnkcogtfdkmxdkmtcqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839685.405292-360-261525679281595/AnsiballZ_systemd.py'
Jan 31 06:08:05 compute-0 sudo[192270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:06 compute-0 python3.9[192272]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:06 compute-0 systemd[1]: Reloading.
Jan 31 06:08:06 compute-0 systemd-sysv-generator[192300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:08:06 compute-0 systemd-rc-local-generator[192297]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:08:06 compute-0 sudo[192270]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:06 compute-0 ceph-mon[75251]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:06 compute-0 sudo[192461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlamrnzdsumhnskxyksnemcxbznimmug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839686.5341234-360-156075864150740/AnsiballZ_systemd.py'
Jan 31 06:08:06 compute-0 sudo[192461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:07 compute-0 python3.9[192463]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:07 compute-0 sudo[192461]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:07 compute-0 sudo[192616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibzsfaspweszbmbhzsblfctnbuyprueh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839687.5555801-360-7289264274476/AnsiballZ_systemd.py'
Jan 31 06:08:07 compute-0 sudo[192616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:08 compute-0 python3.9[192618]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:08 compute-0 systemd[1]: Reloading.
Jan 31 06:08:08 compute-0 systemd-rc-local-generator[192646]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:08:08 compute-0 systemd-sysv-generator[192649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:08:08 compute-0 sudo[192616]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:08 compute-0 ceph-mon[75251]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:08 compute-0 sudo[192806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzjnjnnmaawgngxefoymxyeoifipcpyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839688.7243097-396-104296324679406/AnsiballZ_systemd.py'
Jan 31 06:08:08 compute-0 sudo[192806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:09 compute-0 python3.9[192808]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 06:08:09 compute-0 systemd[1]: Reloading.
Jan 31 06:08:09 compute-0 systemd-rc-local-generator[192834]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:08:09 compute-0 systemd-sysv-generator[192837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:08:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:09 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 06:08:09 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 06:08:09 compute-0 sudo[192806]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:10 compute-0 sudo[192999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgdmaqxtbnehbxsnfcaqqjvmgdtygybq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839689.9250412-404-79903147163557/AnsiballZ_systemd.py'
Jan 31 06:08:10 compute-0 sudo[192999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:10 compute-0 python3.9[193001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:10 compute-0 sudo[192999]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:10 compute-0 sudo[193154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frqmeonhtwmidxcgefqszcjauqorefnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839690.61974-404-200101898275558/AnsiballZ_systemd.py'
Jan 31 06:08:10 compute-0 sudo[193154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:10 compute-0 ceph-mon[75251]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:11 compute-0 python3.9[193156]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:11 compute-0 sudo[193154]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:11 compute-0 sudo[193309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdyotqpfxqwlomxpokikdlpjgusamabz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839691.3702242-404-23411781339675/AnsiballZ_systemd.py'
Jan 31 06:08:11 compute-0 sudo[193309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:11 compute-0 python3.9[193311]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:11 compute-0 sudo[193309]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:12 compute-0 ceph-mon[75251]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:12 compute-0 sudo[193464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxokbmizolnlfhxkbptgdkkjyuuizsrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839692.1134086-404-28493109115798/AnsiballZ_systemd.py'
Jan 31 06:08:12 compute-0 sudo[193464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:12 compute-0 python3.9[193466]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:12 compute-0 sudo[193464]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:13 compute-0 sudo[193619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlczdyvmmnqyxyfslqrvawyydjecvdde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839692.8042626-404-207139294708226/AnsiballZ_systemd.py'
Jan 31 06:08:13 compute-0 sudo[193619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:13 compute-0 python3.9[193621]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:13 compute-0 sudo[193619]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:13 compute-0 ceph-mon[75251]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:14 compute-0 sudo[193774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhdekewthqgriwgppsscmhshrtvqxxtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839693.7445395-404-50932685836748/AnsiballZ_systemd.py'
Jan 31 06:08:14 compute-0 sudo[193774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:14 compute-0 sudo[193777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:08:14 compute-0 sudo[193777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:14 compute-0 sudo[193777]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:14 compute-0 sudo[193802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:08:14 compute-0 sudo[193802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:14 compute-0 python3.9[193776]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:14 compute-0 sudo[193774]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:15 compute-0 sudo[194011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyptvhasjcjqiuaqstnesqmkpesaljdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839694.7501154-404-258452557494316/AnsiballZ_systemd.py'
Jan 31 06:08:15 compute-0 sudo[194011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:15 compute-0 sudo[193802]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:08:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:08:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:08:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:08:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:08:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:08:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:08:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:08:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:08:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:08:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:08:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:08:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:08:15 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:08:15 compute-0 sudo[194014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:08:15 compute-0 sudo[194014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:15 compute-0 sudo[194014]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:15 compute-0 sudo[194039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:08:15 compute-0 sudo[194039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:15 compute-0 python3.9[194013]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:08:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:08:15 compute-0 sudo[194011]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:08:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:08:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:08:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:08:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:15 compute-0 podman[194125]: 2026-01-31 06:08:15.494716623 +0000 UTC m=+0.022118455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:08:15 compute-0 podman[194125]: 2026-01-31 06:08:15.677663126 +0000 UTC m=+0.205064938 container create 8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shtern, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:08:15 compute-0 sudo[194243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgdvivbrbjmvrjwntgrrocchtavpzgef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839695.477087-404-268334452121440/AnsiballZ_systemd.py'
Jan 31 06:08:15 compute-0 sudo[194243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:15 compute-0 systemd[1]: Started libpod-conmon-8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a.scope.
Jan 31 06:08:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:08:15 compute-0 podman[194125]: 2026-01-31 06:08:15.958438383 +0000 UTC m=+0.485840225 container init 8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shtern, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Jan 31 06:08:15 compute-0 podman[194125]: 2026-01-31 06:08:15.963524719 +0000 UTC m=+0.490926531 container start 8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shtern, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 31 06:08:15 compute-0 bold_shtern[194248]: 167 167
Jan 31 06:08:15 compute-0 systemd[1]: libpod-8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a.scope: Deactivated successfully.
Jan 31 06:08:15 compute-0 conmon[194248]: conmon 8914a7caa6032a5bd5fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a.scope/container/memory.events
Jan 31 06:08:15 compute-0 podman[194125]: 2026-01-31 06:08:15.967588355 +0000 UTC m=+0.494990197 container attach 8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 06:08:15 compute-0 podman[194125]: 2026-01-31 06:08:15.968607765 +0000 UTC m=+0.496009577 container died 8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dba5693b2b58a7578e343a18f401462c90abe03c4b11a4006b71e84170c7592-merged.mount: Deactivated successfully.
Jan 31 06:08:16 compute-0 podman[194125]: 2026-01-31 06:08:16.059486759 +0000 UTC m=+0.586888571 container remove 8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 06:08:16 compute-0 systemd[1]: libpod-conmon-8914a7caa6032a5bd5fd2fa78fbfedaf0fe6d451d367d401ba3cc4b530987d5a.scope: Deactivated successfully.
Jan 31 06:08:16 compute-0 python3.9[194245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:16 compute-0 podman[194272]: 2026-01-31 06:08:16.173786815 +0000 UTC m=+0.038036411 container create 4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:08:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:08:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:08:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:08:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:08:16 compute-0 ceph-mon[75251]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:16 compute-0 sudo[194243]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:16 compute-0 systemd[1]: Started libpod-conmon-4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0.scope.
Jan 31 06:08:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0692d7602a3ea6aa7258d6d8003d409087bb12277baac460888366ec598ed921/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0692d7602a3ea6aa7258d6d8003d409087bb12277baac460888366ec598ed921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0692d7602a3ea6aa7258d6d8003d409087bb12277baac460888366ec598ed921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0692d7602a3ea6aa7258d6d8003d409087bb12277baac460888366ec598ed921/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0692d7602a3ea6aa7258d6d8003d409087bb12277baac460888366ec598ed921/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:16 compute-0 podman[194272]: 2026-01-31 06:08:16.240783985 +0000 UTC m=+0.105033601 container init 4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swartz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 06:08:16 compute-0 podman[194272]: 2026-01-31 06:08:16.246367875 +0000 UTC m=+0.110617471 container start 4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:08:16 compute-0 podman[194272]: 2026-01-31 06:08:16.155079529 +0000 UTC m=+0.019329145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:08:16 compute-0 podman[194272]: 2026-01-31 06:08:16.253245232 +0000 UTC m=+0.117494858 container attach 4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swartz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:08:16 compute-0 sudo[194456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmmlnfngmppgoetfuxzbrpxxpfzndxzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839696.3322191-404-273142740659336/AnsiballZ_systemd.py'
Jan 31 06:08:16 compute-0 sudo[194456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:16 compute-0 dreamy_swartz[194292]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:08:16 compute-0 dreamy_swartz[194292]: --> All data devices are unavailable
Jan 31 06:08:16 compute-0 systemd[1]: libpod-4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0.scope: Deactivated successfully.
Jan 31 06:08:16 compute-0 podman[194272]: 2026-01-31 06:08:16.704861026 +0000 UTC m=+0.569110652 container died 4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 06:08:16 compute-0 python3.9[194459]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:16 compute-0 sudo[194456]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0692d7602a3ea6aa7258d6d8003d409087bb12277baac460888366ec598ed921-merged.mount: Deactivated successfully.
Jan 31 06:08:17 compute-0 podman[194272]: 2026-01-31 06:08:17.389527237 +0000 UTC m=+1.253776833 container remove 4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_swartz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:08:17 compute-0 sudo[194629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saxbtmkoekyeyuwrlbkohweeaxtueicr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839697.2032118-404-252398249123822/AnsiballZ_systemd.py'
Jan 31 06:08:17 compute-0 sudo[194039]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:17 compute-0 sudo[194629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:17 compute-0 sudo[194632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:08:17 compute-0 sudo[194632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:17 compute-0 sudo[194632]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:17 compute-0 systemd[1]: libpod-conmon-4275a89a32d42741fd1d998a2e572580b21a661b5decb909ddf81ebceb57ccb0.scope: Deactivated successfully.
Jan 31 06:08:17 compute-0 sudo[194657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:08:17 compute-0 sudo[194657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:17 compute-0 python3.9[194631]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:17 compute-0 podman[194696]: 2026-01-31 06:08:17.759970324 +0000 UTC m=+0.058522528 container create 32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 06:08:17 compute-0 systemd[1]: Started libpod-conmon-32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563.scope.
Jan 31 06:08:17 compute-0 sudo[194629]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:17 compute-0 podman[194696]: 2026-01-31 06:08:17.719804103 +0000 UTC m=+0.018356327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:08:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:08:17 compute-0 podman[194696]: 2026-01-31 06:08:17.986288301 +0000 UTC m=+0.284840525 container init 32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 06:08:17 compute-0 podman[194696]: 2026-01-31 06:08:17.991381767 +0000 UTC m=+0.289933971 container start 32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_wu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 06:08:17 compute-0 amazing_wu[194715]: 167 167
Jan 31 06:08:17 compute-0 systemd[1]: libpod-32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563.scope: Deactivated successfully.
Jan 31 06:08:18 compute-0 podman[194696]: 2026-01-31 06:08:18.095444179 +0000 UTC m=+0.393996403 container attach 32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_wu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:08:18 compute-0 podman[194696]: 2026-01-31 06:08:18.096440938 +0000 UTC m=+0.394993192 container died 32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_wu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:08:18 compute-0 sudo[194883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqtzkzyiphwnwsqkqnflpavxhnkqcdwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839697.922352-404-4587820219607/AnsiballZ_systemd.py'
Jan 31 06:08:18 compute-0 sudo[194883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:18 compute-0 python3.9[194885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:18 compute-0 sudo[194883]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-35ab01970d2de7448707ae311a4905f8112cd275cfcf00270dc1993ca8fccc21-merged.mount: Deactivated successfully.
Jan 31 06:08:18 compute-0 sudo[195039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krehqbknptwmukwpofgzykyrmhvzbyye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839698.6044672-404-13991365797671/AnsiballZ_systemd.py'
Jan 31 06:08:18 compute-0 sudo[195039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:18 compute-0 ceph-mon[75251]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:19 compute-0 podman[194696]: 2026-01-31 06:08:19.102743728 +0000 UTC m=+1.401295932 container remove 32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_wu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:08:19 compute-0 systemd[1]: libpod-conmon-32706c06e7bad91e6177bdbe8ac21b5bbd7261abd39dbd858b2514f0e1f2e563.scope: Deactivated successfully.
Jan 31 06:08:19 compute-0 python3.9[195041]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:19 compute-0 sudo[195039]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:19 compute-0 podman[195049]: 2026-01-31 06:08:19.342317044 +0000 UTC m=+0.159023818 container create 85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 06:08:19 compute-0 podman[195049]: 2026-01-31 06:08:19.249307939 +0000 UTC m=+0.066014743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:08:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:19 compute-0 sudo[195215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lklfzmgdjvkyxeygkxfemfahblfrriic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839699.3675017-404-252446748546123/AnsiballZ_systemd.py'
Jan 31 06:08:19 compute-0 sudo[195215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:19 compute-0 systemd[1]: Started libpod-conmon-85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7.scope.
Jan 31 06:08:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6506ff7b3106babf79ff3687ac5d61e7fbf9f240a088c246febc9902bcd1ef08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6506ff7b3106babf79ff3687ac5d61e7fbf9f240a088c246febc9902bcd1ef08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6506ff7b3106babf79ff3687ac5d61e7fbf9f240a088c246febc9902bcd1ef08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6506ff7b3106babf79ff3687ac5d61e7fbf9f240a088c246febc9902bcd1ef08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:19 compute-0 podman[195049]: 2026-01-31 06:08:19.749075532 +0000 UTC m=+0.565782326 container init 85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ritchie, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:08:19 compute-0 podman[195049]: 2026-01-31 06:08:19.755253159 +0000 UTC m=+0.571959933 container start 85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ritchie, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 06:08:19 compute-0 podman[195049]: 2026-01-31 06:08:19.779505734 +0000 UTC m=+0.596212538 container attach 85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:08:19 compute-0 python3.9[195217]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]: {
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:     "0": [
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:         {
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "devices": [
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "/dev/loop3"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             ],
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_name": "ceph_lv0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_size": "21470642176",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "name": "ceph_lv0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "tags": {
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cluster_name": "ceph",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.crush_device_class": "",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.encrypted": "0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.objectstore": "bluestore",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osd_id": "0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.type": "block",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.vdo": "0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.with_tpm": "0"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             },
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "type": "block",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "vg_name": "ceph_vg0"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:         }
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:     ],
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:     "1": [
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:         {
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "devices": [
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "/dev/loop4"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             ],
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_name": "ceph_lv1",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_size": "21470642176",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "name": "ceph_lv1",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "tags": {
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cluster_name": "ceph",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.crush_device_class": "",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.encrypted": "0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.objectstore": "bluestore",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osd_id": "1",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.type": "block",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.vdo": "0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.with_tpm": "0"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             },
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "type": "block",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "vg_name": "ceph_vg1"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:         }
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:     ],
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:     "2": [
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:         {
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "devices": [
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "/dev/loop5"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             ],
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_name": "ceph_lv2",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_size": "21470642176",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "name": "ceph_lv2",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "tags": {
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.cluster_name": "ceph",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.crush_device_class": "",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.encrypted": "0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.objectstore": "bluestore",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osd_id": "2",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.type": "block",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.vdo": "0",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:                 "ceph.with_tpm": "0"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             },
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "type": "block",
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:             "vg_name": "ceph_vg2"
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:         }
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]:     ]
Jan 31 06:08:19 compute-0 hungry_ritchie[195220]: }
Jan 31 06:08:20 compute-0 systemd[1]: libpod-85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7.scope: Deactivated successfully.
Jan 31 06:08:20 compute-0 podman[195049]: 2026-01-31 06:08:20.016582109 +0000 UTC m=+0.833288893 container died 85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 06:08:20 compute-0 sudo[195215]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:20 compute-0 ceph-mon[75251]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6506ff7b3106babf79ff3687ac5d61e7fbf9f240a088c246febc9902bcd1ef08-merged.mount: Deactivated successfully.
Jan 31 06:08:20 compute-0 sudo[195403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxijhfdpclneixhjkqznsjkcgleyxuie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839700.1607056-404-201128021434956/AnsiballZ_systemd.py'
Jan 31 06:08:20 compute-0 sudo[195403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:20 compute-0 podman[195049]: 2026-01-31 06:08:20.524835556 +0000 UTC m=+1.341542330 container remove 85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_ritchie, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:08:20 compute-0 sudo[194657]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:20 compute-0 systemd[1]: libpod-conmon-85916c50c5647afecb61291c139cf7839af56b30d79dbd834d8f7a004bcbf2f7.scope: Deactivated successfully.
Jan 31 06:08:20 compute-0 sudo[195406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:08:20 compute-0 sudo[195406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:20 compute-0 sudo[195406]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:20 compute-0 podman[195342]: 2026-01-31 06:08:20.618770128 +0000 UTC m=+0.298072044 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:08:20 compute-0 sudo[195444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:08:20 compute-0 sudo[195444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:20 compute-0 python3.9[195405]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 06:08:20 compute-0 sudo[195403]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:20 compute-0 podman[195510]: 2026-01-31 06:08:20.853626648 +0000 UTC m=+0.017065570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:08:21 compute-0 podman[195510]: 2026-01-31 06:08:21.085247256 +0000 UTC m=+0.248686158 container create 341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:08:21 compute-0 systemd[1]: Started libpod-conmon-341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174.scope.
Jan 31 06:08:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:08:21 compute-0 sudo[195654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvrikydizrriveqswsaihhfwdalommgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839701.1180806-506-57414954541100/AnsiballZ_file.py'
Jan 31 06:08:21 compute-0 sudo[195654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:21 compute-0 podman[195510]: 2026-01-31 06:08:21.385412369 +0000 UTC m=+0.548851301 container init 341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:08:21 compute-0 podman[195510]: 2026-01-31 06:08:21.391100782 +0000 UTC m=+0.554539684 container start 341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 06:08:21 compute-0 gallant_proskuriakova[195614]: 167 167
Jan 31 06:08:21 compute-0 systemd[1]: libpod-341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174.scope: Deactivated successfully.
Jan 31 06:08:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:21 compute-0 python3.9[195656]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:08:21 compute-0 podman[195510]: 2026-01-31 06:08:21.527403678 +0000 UTC m=+0.690842600 container attach 341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:08:21 compute-0 podman[195510]: 2026-01-31 06:08:21.529790797 +0000 UTC m=+0.693229699 container died 341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_proskuriakova, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 06:08:21 compute-0 sudo[195654]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4895680ed9cea32fa4d22fca86f40bfae6756c2a5faa0b4d8a6702e4acaaa803-merged.mount: Deactivated successfully.
Jan 31 06:08:21 compute-0 sudo[195820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icezywvptgglwgbmqfcnjvzqvokrkhpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839701.6778588-506-15664317213675/AnsiballZ_file.py'
Jan 31 06:08:21 compute-0 sudo[195820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:22 compute-0 python3.9[195822]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:08:22 compute-0 sudo[195820]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:22 compute-0 podman[195510]: 2026-01-31 06:08:22.42000695 +0000 UTC m=+1.583445862 container remove 341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:08:22 compute-0 systemd[1]: libpod-conmon-341161fdde172180a8548512ccc70b0ff12bfc82b35dd33ce71df8b7ad527174.scope: Deactivated successfully.
Jan 31 06:08:22 compute-0 podman[195901]: 2026-01-31 06:08:22.529368264 +0000 UTC m=+0.052433503 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 06:08:22 compute-0 podman[195943]: 2026-01-31 06:08:22.574851398 +0000 UTC m=+0.050197140 container create 3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:08:22 compute-0 systemd[1]: Started libpod-conmon-3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f.scope.
Jan 31 06:08:22 compute-0 podman[195943]: 2026-01-31 06:08:22.545230949 +0000 UTC m=+0.020576711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:08:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6acbcf1c1c415a9ca512b156857152d52ea9240858d79738aa281304a372c4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6acbcf1c1c415a9ca512b156857152d52ea9240858d79738aa281304a372c4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6acbcf1c1c415a9ca512b156857152d52ea9240858d79738aa281304a372c4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6acbcf1c1c415a9ca512b156857152d52ea9240858d79738aa281304a372c4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:08:22 compute-0 sudo[196014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opatfhbgicqzfplzldhdsxxnblakjuhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839702.3629777-506-245091885147684/AnsiballZ_file.py'
Jan 31 06:08:22 compute-0 sudo[196014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:22 compute-0 ceph-mon[75251]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:22 compute-0 podman[195943]: 2026-01-31 06:08:22.823454613 +0000 UTC m=+0.298800445 container init 3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_raman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:08:22 compute-0 podman[195943]: 2026-01-31 06:08:22.830798213 +0000 UTC m=+0.306143965 container start 3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:08:22 compute-0 python3.9[196016]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:08:22 compute-0 sudo[196014]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:22 compute-0 podman[195943]: 2026-01-31 06:08:22.886617113 +0000 UTC m=+0.361962875 container attach 3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_raman, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:08:23 compute-0 sudo[196195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktunfhejxqckiuzwixxxnjsdkimzrdgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839702.9808888-506-89929491429668/AnsiballZ_file.py'
Jan 31 06:08:23 compute-0 sudo[196195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:23 compute-0 python3.9[196203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:08:23 compute-0 lvm[196242]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:08:23 compute-0 lvm[196242]: VG ceph_vg0 finished
Jan 31 06:08:23 compute-0 lvm[196245]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:08:23 compute-0 lvm[196245]: VG ceph_vg1 finished
Jan 31 06:08:23 compute-0 sudo[196195]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:23 compute-0 lvm[196247]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:08:23 compute-0 lvm[196247]: VG ceph_vg2 finished
Jan 31 06:08:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:23 compute-0 hungry_raman[195985]: {}
Jan 31 06:08:23 compute-0 systemd[1]: libpod-3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f.scope: Deactivated successfully.
Jan 31 06:08:23 compute-0 podman[195943]: 2026-01-31 06:08:23.516672371 +0000 UTC m=+0.992018133 container died 3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 06:08:23 compute-0 sudo[196410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwmltyafljhdslpxzkwseezfsktggawl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839703.4677804-506-201971349996004/AnsiballZ_file.py'
Jan 31 06:08:23 compute-0 sudo[196410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6acbcf1c1c415a9ca512b156857152d52ea9240858d79738aa281304a372c4f-merged.mount: Deactivated successfully.
Jan 31 06:08:23 compute-0 podman[195943]: 2026-01-31 06:08:23.761308342 +0000 UTC m=+1.236654094 container remove 3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:08:23 compute-0 systemd[1]: libpod-conmon-3d97a80a026d422626067a06f67015fcfd7644ca892655d6bf3cdec0cb0e364f.scope: Deactivated successfully.
Jan 31 06:08:23 compute-0 sudo[195444]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:08:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:08:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:08:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:08:23 compute-0 python3.9[196412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:08:23 compute-0 sudo[196416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:08:23 compute-0 sudo[196416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:08:23 compute-0 sudo[196416]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:23 compute-0 sudo[196410]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:24 compute-0 sudo[196590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwatjnfidcihqlnygrectdwprhjammry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839703.9882584-506-254549414293105/AnsiballZ_file.py'
Jan 31 06:08:24 compute-0 sudo[196590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:24 compute-0 python3.9[196592]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:08:24 compute-0 sudo[196590]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:24 compute-0 ceph-mon[75251]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:08:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:08:24 compute-0 python3.9[196742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:08:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:25 compute-0 sudo[196892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbrquklqxxvzdvqiojsjsuljabcjtatk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839705.1438546-557-205294763549771/AnsiballZ_stat.py'
Jan 31 06:08:25 compute-0 sudo[196892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:25 compute-0 python3.9[196894]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:25 compute-0 ceph-mon[75251]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:25 compute-0 sudo[196892]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:26 compute-0 sudo[197017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sisgqtcmxilwcqajunoqjuqreafwfvyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839705.1438546-557-205294763549771/AnsiballZ_copy.py'
Jan 31 06:08:26 compute-0 sudo[197017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:26 compute-0 python3.9[197019]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769839705.1438546-557-205294763549771/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:26 compute-0 sudo[197017]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:27 compute-0 sudo[197169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghasjjottsjclvyqmvugtjkzkfrqvwum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839706.8690398-557-116535423597616/AnsiballZ_stat.py'
Jan 31 06:08:27 compute-0 sudo[197169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:27 compute-0 python3.9[197171]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:27 compute-0 sudo[197169]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:27 compute-0 sudo[197294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsjwmkwrxipoebsdutqokozenxxvpudt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839706.8690398-557-116535423597616/AnsiballZ_copy.py'
Jan 31 06:08:27 compute-0 sudo[197294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:27 compute-0 python3.9[197296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769839706.8690398-557-116535423597616/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:27 compute-0 sudo[197294]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:28 compute-0 sudo[197446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkdsqsuorzrywcnhzxuejairlsfvfexr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839707.9126277-557-263835531320272/AnsiballZ_stat.py'
Jan 31 06:08:28 compute-0 sudo[197446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:28 compute-0 python3.9[197448]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:28 compute-0 sudo[197446]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:28 compute-0 ceph-mon[75251]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:28 compute-0 sudo[197571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axvadfyzlmenvdrdolwxmlryvextlqgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839707.9126277-557-263835531320272/AnsiballZ_copy.py'
Jan 31 06:08:28 compute-0 sudo[197571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:28 compute-0 python3.9[197573]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769839707.9126277-557-263835531320272/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:28 compute-0 sudo[197571]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:29 compute-0 sudo[197723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkwqvszpurkbcagqrwfkrgnpktxebmgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839709.0121028-557-37930920710399/AnsiballZ_stat.py'
Jan 31 06:08:29 compute-0 sudo[197723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:29 compute-0 python3.9[197725]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:29 compute-0 sudo[197723]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:29 compute-0 sudo[197848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smxkecbhfwbfmavawferppayvuokggjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839709.0121028-557-37930920710399/AnsiballZ_copy.py'
Jan 31 06:08:29 compute-0 sudo[197848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:30 compute-0 python3.9[197850]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769839709.0121028-557-37930920710399/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:30 compute-0 sudo[197848]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:30 compute-0 sudo[198000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgupvaxzrstzaqomxezhsbsxosaauobg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839710.182665-557-214712835303954/AnsiballZ_stat.py'
Jan 31 06:08:30 compute-0 sudo[198000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:30 compute-0 python3.9[198002]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:30 compute-0 sudo[198000]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:30 compute-0 ceph-mon[75251]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:30 compute-0 sudo[198125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tobmhoibzedadgjznjlqytrujiydjflg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839710.182665-557-214712835303954/AnsiballZ_copy.py'
Jan 31 06:08:30 compute-0 sudo[198125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:31 compute-0 python3.9[198127]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769839710.182665-557-214712835303954/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:31 compute-0 sudo[198125]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:31 compute-0 sudo[198277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjwoyorixzuceasrporednhlrgizadoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839711.3398743-557-151714795956232/AnsiballZ_stat.py'
Jan 31 06:08:31 compute-0 sudo[198277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:31 compute-0 python3.9[198279]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:31 compute-0 sudo[198277]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:31 compute-0 ceph-mon[75251]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:32 compute-0 sudo[198402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zocssyfddqsvykjdlawijsataustdzdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839711.3398743-557-151714795956232/AnsiballZ_copy.py'
Jan 31 06:08:32 compute-0 sudo[198402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:32 compute-0 python3.9[198404]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769839711.3398743-557-151714795956232/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:32 compute-0 sudo[198402]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:32 compute-0 sudo[198554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weslyyvcsjmjgqnebxoncwxjshhblwxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839712.3940976-557-156353558673257/AnsiballZ_stat.py'
Jan 31 06:08:32 compute-0 sudo[198554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:32 compute-0 python3.9[198556]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:32 compute-0 sudo[198554]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:33 compute-0 sudo[198677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwdablbtjbjkxqmlduzfvmyozvsjmyzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839712.3940976-557-156353558673257/AnsiballZ_copy.py'
Jan 31 06:08:33 compute-0 sudo[198677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:33 compute-0 python3.9[198679]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769839712.3940976-557-156353558673257/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:33 compute-0 sudo[198677]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:33 compute-0 sudo[198829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axisibfpvvmymyayqeevddlaiqmilbai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839713.3985322-557-151788329679947/AnsiballZ_stat.py'
Jan 31 06:08:33 compute-0 sudo[198829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:33 compute-0 python3.9[198831]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:33 compute-0 sudo[198829]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:34 compute-0 sudo[198954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqasihojpzubrvimgxidgwhpqyljedrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839713.3985322-557-151788329679947/AnsiballZ_copy.py'
Jan 31 06:08:34 compute-0 sudo[198954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:34 compute-0 python3.9[198956]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769839713.3985322-557-151788329679947/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:34 compute-0 sudo[198954]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:34 compute-0 ceph-mon[75251]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:34 compute-0 sudo[199106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqstxgfpgdsyvbosrytkrngtxxrhjnnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839714.5196464-670-96065025292393/AnsiballZ_command.py'
Jan 31 06:08:34 compute-0 sudo[199106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:34 compute-0 python3.9[199108]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 06:08:34 compute-0 sudo[199106]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:35 compute-0 sudo[199259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuhrlspwjghklfnpsvontyourwzpxyri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839715.1115236-679-270679046804735/AnsiballZ_file.py'
Jan 31 06:08:35 compute-0 sudo[199259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:35 compute-0 python3.9[199261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:35 compute-0 sudo[199259]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:35 compute-0 sudo[199411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuywpdlixjuzmeikduorxdesmlkuqsfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839715.6167967-679-156680294051771/AnsiballZ_file.py'
Jan 31 06:08:35 compute-0 sudo[199411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:35 compute-0 python3.9[199413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:36 compute-0 sudo[199411]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:36 compute-0 sudo[199563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqkenxmjltbkigdcbtbjqbvoteyiuzss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839716.1042004-679-188057435533515/AnsiballZ_file.py'
Jan 31 06:08:36 compute-0 sudo[199563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:36 compute-0 python3.9[199565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:36 compute-0 sudo[199563]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:36 compute-0 ceph-mon[75251]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:36 compute-0 sudo[199715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjxrgmwklnmkredobzbziuwkehjbgxyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839716.6076243-679-84341087891434/AnsiballZ_file.py'
Jan 31 06:08:36 compute-0 sudo[199715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:36 compute-0 python3.9[199717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:36 compute-0 sudo[199715]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:37 compute-0 sudo[199867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlfejfdropeuetdwkurxpeumfigcazpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839717.0924-679-175199394255987/AnsiballZ_file.py'
Jan 31 06:08:37 compute-0 sudo[199867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:37 compute-0 python3.9[199869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:37 compute-0 sudo[199867]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:37 compute-0 sudo[200019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucfyqgwcmrzkfhwaobbxprrdrfomkgll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839717.60839-679-229720939836750/AnsiballZ_file.py'
Jan 31 06:08:37 compute-0 sudo[200019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:37 compute-0 ceph-mon[75251]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:38 compute-0 python3.9[200021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:38 compute-0 sudo[200019]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:38 compute-0 sudo[200171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmhggxjlffssmtggahtqnhecztbmmpni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839718.151334-679-138779622964618/AnsiballZ_file.py'
Jan 31 06:08:38 compute-0 sudo[200171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:38 compute-0 python3.9[200173]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:38 compute-0 sudo[200171]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:38 compute-0 sudo[200323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gydwheycavuxommvaajrsydlgajbmzgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839718.6806898-679-105653125425199/AnsiballZ_file.py'
Jan 31 06:08:38 compute-0 sudo[200323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:39 compute-0 python3.9[200325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:39 compute-0 sudo[200323]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:39 compute-0 sudo[200475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlwjhqyphcvizyhpxbloloczgvwvieyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839719.18005-679-160667038217270/AnsiballZ_file.py'
Jan 31 06:08:39 compute-0 sudo[200475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:39 compute-0 python3.9[200477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:39 compute-0 sudo[200475]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:39 compute-0 sudo[200627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxwowxzacawxbanarekvnpbomqfgipcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839719.7249703-679-248671814845440/AnsiballZ_file.py'
Jan 31 06:08:39 compute-0 sudo[200627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:40 compute-0 python3.9[200629]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:40 compute-0 sudo[200627]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:40 compute-0 sudo[200779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnaqgapnfizdoyboopufjbdcswuvgxpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839720.2734697-679-73191649605794/AnsiballZ_file.py'
Jan 31 06:08:40 compute-0 sudo[200779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:40 compute-0 python3.9[200781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:40 compute-0 sudo[200779]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:40 compute-0 ceph-mon[75251]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:41 compute-0 sudo[200931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzcfmawwmiborywtmbwovvpxefedpwaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839720.8294315-679-7001755124673/AnsiballZ_file.py'
Jan 31 06:08:41 compute-0 sudo[200931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:41 compute-0 python3.9[200933]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:41 compute-0 sudo[200931]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:41 compute-0 sudo[201083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjwynvjkpgwsjjuxmiukvlajsfjjfjwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839721.3232775-679-150775716120879/AnsiballZ_file.py'
Jan 31 06:08:41 compute-0 sudo[201083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:41 compute-0 python3.9[201085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:41 compute-0 sudo[201083]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:42 compute-0 sudo[201235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gziwblcfwxakqqgrshcyufeydmahsslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839721.8277762-679-642773100008/AnsiballZ_file.py'
Jan 31 06:08:42 compute-0 sudo[201235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:42 compute-0 python3.9[201237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:42 compute-0 sudo[201235]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:42 compute-0 ceph-mon[75251]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:42 compute-0 sudo[201387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muckshzmjjfcdwxlodbffmeusarmaefh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839722.371892-778-273673468234964/AnsiballZ_stat.py'
Jan 31 06:08:42 compute-0 sudo[201387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:42 compute-0 python3.9[201389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:42 compute-0 sudo[201387]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:43 compute-0 sudo[201510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ictvlxqicuprrmshureomvfnzkowepos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839722.371892-778-273673468234964/AnsiballZ_copy.py'
Jan 31 06:08:43 compute-0 sudo[201510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:43 compute-0 python3.9[201512]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839722.371892-778-273673468234964/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:43 compute-0 sudo[201510]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:43 compute-0 sudo[201662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drpdvcvehbmzbfxgwddalhnwoxwfemrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839723.4814684-778-238390604127037/AnsiballZ_stat.py'
Jan 31 06:08:43 compute-0 sudo[201662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:43 compute-0 python3.9[201664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:43 compute-0 sudo[201662]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:44 compute-0 sudo[201785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apccudznmyobnkbsfdoawabwfnlhtzof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839723.4814684-778-238390604127037/AnsiballZ_copy.py'
Jan 31 06:08:44 compute-0 sudo[201785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:44 compute-0 python3.9[201787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839723.4814684-778-238390604127037/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:44 compute-0 sudo[201785]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:08:44
Jan 31 06:08:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:08:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:08:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups', 'vms', 'images', '.mgr']
Jan 31 06:08:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:08:44 compute-0 sudo[201937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfchyqmlmubthkslacszzwtzgvtrgbto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839724.4433594-778-261221060623351/AnsiballZ_stat.py'
Jan 31 06:08:44 compute-0 sudo[201937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:44 compute-0 python3.9[201939]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:44 compute-0 sudo[201937]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:44 compute-0 ceph-mon[75251]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:45 compute-0 sudo[202060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lizsgqdxdqjvfojfurzcyyjsurgbvnjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839724.4433594-778-261221060623351/AnsiballZ_copy.py'
Jan 31 06:08:45 compute-0 sudo[202060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:45 compute-0 python3.9[202062]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839724.4433594-778-261221060623351/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:45 compute-0 sudo[202060]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:08:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:45 compute-0 sudo[202212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqhcdmmeppbrcyasolrvrqskpzmxhegt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839725.37361-778-9010808551312/AnsiballZ_stat.py'
Jan 31 06:08:45 compute-0 sudo[202212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:45 compute-0 python3.9[202214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:45 compute-0 sudo[202212]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:46 compute-0 ceph-mon[75251]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:46 compute-0 sudo[202335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojaxvlavlbpcuiuhxmskbsmvmeygnnwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839725.37361-778-9010808551312/AnsiballZ_copy.py'
Jan 31 06:08:46 compute-0 sudo[202335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:46 compute-0 python3.9[202337]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839725.37361-778-9010808551312/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:46 compute-0 sudo[202335]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:46 compute-0 sudo[202487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlobqrhdevyzzaljtossczpxaqgomvod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839726.4966016-778-188767640290431/AnsiballZ_stat.py'
Jan 31 06:08:46 compute-0 sudo[202487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:46 compute-0 python3.9[202489]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:46 compute-0 sudo[202487]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:47 compute-0 sudo[202610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxgjyacmuseyfadsubujgfwxpwrghefq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839726.4966016-778-188767640290431/AnsiballZ_copy.py'
Jan 31 06:08:47 compute-0 sudo[202610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:47 compute-0 python3.9[202612]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839726.4966016-778-188767640290431/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:47 compute-0 sudo[202610]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:47 compute-0 sudo[202762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpasjnyrjfkfcqadquskykibyjzbpjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839727.5776699-778-36773312828007/AnsiballZ_stat.py'
Jan 31 06:08:47 compute-0 sudo[202762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:47 compute-0 ceph-mon[75251]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:48 compute-0 python3.9[202764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:48 compute-0 sudo[202762]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:48 compute-0 sudo[202885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxrzigmvtopcjdfsrywbwubskvlfgrfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839727.5776699-778-36773312828007/AnsiballZ_copy.py'
Jan 31 06:08:48 compute-0 sudo[202885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:48 compute-0 python3.9[202887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839727.5776699-778-36773312828007/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:48 compute-0 sudo[202885]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:48 compute-0 sudo[203037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyfvabsizplhxbrkufrcnkbwoxmkcvcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839728.7271457-778-27767710334231/AnsiballZ_stat.py'
Jan 31 06:08:48 compute-0 sudo[203037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:49 compute-0 python3.9[203039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:49 compute-0 sudo[203037]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:49 compute-0 sudo[203160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwjjovykjsguzgaaihwjqrbxhllpjsqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839728.7271457-778-27767710334231/AnsiballZ_copy.py'
Jan 31 06:08:49 compute-0 sudo[203160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:49 compute-0 python3.9[203162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839728.7271457-778-27767710334231/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:49 compute-0 sudo[203160]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:50 compute-0 sudo[203312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmpnjmzoxpyvrvjtklvhsstgyhrfrakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839729.8326278-778-132022259028050/AnsiballZ_stat.py'
Jan 31 06:08:50 compute-0 sudo[203312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:50 compute-0 python3.9[203314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:50 compute-0 sudo[203312]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:08:50.202 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:08:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:08:50.203 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:08:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:08:50.204 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:08:50 compute-0 sudo[203435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pplyvhwvtouijiughntpabpnkdqmrqwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839729.8326278-778-132022259028050/AnsiballZ_copy.py'
Jan 31 06:08:50 compute-0 sudo[203435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:50 compute-0 ceph-mon[75251]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:50 compute-0 python3.9[203437]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839729.8326278-778-132022259028050/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:50 compute-0 sudo[203435]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:51 compute-0 podman[203537]: 2026-01-31 06:08:51.159486958 +0000 UTC m=+0.077073060 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:08:51 compute-0 sudo[203612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdsiehkaewfwmngctnvjjandwzyemapn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839730.9511843-778-13933210559738/AnsiballZ_stat.py'
Jan 31 06:08:51 compute-0 sudo[203612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:51 compute-0 python3.9[203616]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:51 compute-0 sudo[203612]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:51 compute-0 sudo[203737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajorvlcdbmctranwbpnvtbutfqrdmbkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839730.9511843-778-13933210559738/AnsiballZ_copy.py'
Jan 31 06:08:51 compute-0 sudo[203737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:51 compute-0 python3.9[203739]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839730.9511843-778-13933210559738/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:51 compute-0 sudo[203737]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:51 compute-0 ceph-mon[75251]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:52 compute-0 sudo[203889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zptmjhixxdznnnlbcvouzbuvzkdbzbme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839731.9909184-778-145845948929527/AnsiballZ_stat.py'
Jan 31 06:08:52 compute-0 sudo[203889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:52 compute-0 python3.9[203891]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:52 compute-0 sudo[203889]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:52 compute-0 sudo[204022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyliqlzsucfkwbyidgtllfuukeoaulef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839731.9909184-778-145845948929527/AnsiballZ_copy.py'
Jan 31 06:08:52 compute-0 sudo[204022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:52 compute-0 podman[203986]: 2026-01-31 06:08:52.852840709 +0000 UTC m=+0.067275339 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 06:08:53 compute-0 python3.9[204031]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839731.9909184-778-145845948929527/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:53 compute-0 sudo[204022]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:53 compute-0 sudo[204183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkuflirczddhdcajpvqbdfqxcezurypb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839733.202858-778-279800612219877/AnsiballZ_stat.py'
Jan 31 06:08:53 compute-0 sudo[204183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:53 compute-0 python3.9[204185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:53 compute-0 sudo[204183]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:53 compute-0 sudo[204306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dopxbqfoltxuvkrrmnfkotxffmyfoshr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839733.202858-778-279800612219877/AnsiballZ_copy.py'
Jan 31 06:08:53 compute-0 sudo[204306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:54 compute-0 python3.9[204308]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839733.202858-778-279800612219877/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:54 compute-0 sudo[204306]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:54 compute-0 sudo[204458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkhfxjqkhsfnrnkvjdgtuwijptbcbhxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839734.226404-778-9691195414584/AnsiballZ_stat.py'
Jan 31 06:08:54 compute-0 sudo[204458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:54 compute-0 python3.9[204460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:54 compute-0 sudo[204458]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:55 compute-0 ceph-mon[75251]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:55 compute-0 sudo[204581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugwloxtmlnuactwrqbhprcvnoxaoxlzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839734.226404-778-9691195414584/AnsiballZ_copy.py'
Jan 31 06:08:55 compute-0 sudo[204581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:55 compute-0 python3.9[204583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839734.226404-778-9691195414584/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:55 compute-0 sudo[204581]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:55 compute-0 sudo[204733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hozfkjrvtthqpnufvbqvxheqhvwhpxqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839735.4061775-778-231471307052792/AnsiballZ_stat.py'
Jan 31 06:08:55 compute-0 sudo[204733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:08:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:08:55 compute-0 python3.9[204735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:55 compute-0 sudo[204733]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:56 compute-0 sudo[204856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gymlmcsdwylwliiprzcfujpfvlasodgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839735.4061775-778-231471307052792/AnsiballZ_copy.py'
Jan 31 06:08:56 compute-0 sudo[204856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:56 compute-0 ceph-mon[75251]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:56 compute-0 python3.9[204858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839735.4061775-778-231471307052792/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:56 compute-0 sudo[204856]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:56 compute-0 sudo[205008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkkiindrfmjbbmmjkeizypguirxudzpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839736.5133576-778-105280158889555/AnsiballZ_stat.py'
Jan 31 06:08:56 compute-0 sudo[205008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:56 compute-0 python3.9[205010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:08:56 compute-0 sudo[205008]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:57 compute-0 sudo[205131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omeyqehuqwktedonuuenzzfseivqgbqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839736.5133576-778-105280158889555/AnsiballZ_copy.py'
Jan 31 06:08:57 compute-0 sudo[205131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:57 compute-0 python3.9[205133]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839736.5133576-778-105280158889555/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:08:57 compute-0 sudo[205131]: pam_unix(sudo:session): session closed for user root
Jan 31 06:08:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:57 compute-0 ceph-mon[75251]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:08:57 compute-0 python3.9[205283]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:08:58 compute-0 sudo[205436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvpakkdgvpfgvminsercdfjautncktc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839738.290406-984-10758332489596/AnsiballZ_seboolean.py'
Jan 31 06:08:58 compute-0 sudo[205436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:08:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:08:58 compute-0 python3.9[205438]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 06:08:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:01 compute-0 ceph-mon[75251]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:01 compute-0 sudo[205436]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:01 compute-0 auditd[706]: Audit daemon rotating log files
Jan 31 06:09:01 compute-0 sudo[205592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppmqtalwckfgmlqoctvqnwtkdxhrnwtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839741.7296422-992-280405147247917/AnsiballZ_copy.py'
Jan 31 06:09:01 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 06:09:01 compute-0 sudo[205592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:02 compute-0 python3.9[205594]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:02 compute-0 sudo[205592]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:02 compute-0 sudo[205744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxrflysxqjoskbnsnhymnlrgawwhrzon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839742.244864-992-133672084291853/AnsiballZ_copy.py'
Jan 31 06:09:02 compute-0 sudo[205744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:02 compute-0 ceph-mon[75251]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:02 compute-0 python3.9[205746]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:02 compute-0 sudo[205744]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:03 compute-0 sudo[205896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukvzlfrnbhfdxcfyxlfcormwsnfyrfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839742.8302283-992-265740136179804/AnsiballZ_copy.py'
Jan 31 06:09:03 compute-0 sudo[205896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:03 compute-0 python3.9[205898]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:03 compute-0 sudo[205896]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:03 compute-0 sudo[206048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alkpgjcnmuvunahlzkhjesabovuzguai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839743.365358-992-166382488015423/AnsiballZ_copy.py'
Jan 31 06:09:03 compute-0 sudo[206048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:03 compute-0 python3.9[206050]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:03 compute-0 sudo[206048]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:03 compute-0 ceph-mon[75251]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:04 compute-0 sudo[206200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouzhhdfcdzzzvrvristmwgaptkvofbwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839743.95287-992-4507955612924/AnsiballZ_copy.py'
Jan 31 06:09:04 compute-0 sudo[206200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:04 compute-0 python3.9[206202]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:04 compute-0 sudo[206200]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:04 compute-0 sudo[206352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nevirzshpwbujjydfdhjqzvlfycxubtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839744.5258791-1028-59072757424554/AnsiballZ_copy.py'
Jan 31 06:09:04 compute-0 sudo[206352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:04 compute-0 python3.9[206354]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:04 compute-0 sudo[206352]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:05 compute-0 sudo[206504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nordawoafcnyygtagdmsedjlhxrgdbzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839745.0716581-1028-65475602253958/AnsiballZ_copy.py'
Jan 31 06:09:05 compute-0 sudo[206504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:05 compute-0 python3.9[206506]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:05 compute-0 sudo[206504]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:05 compute-0 sudo[206656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naixwwpwlnmqvpixvwxzcevlbkzxymwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839745.6995149-1028-216355354206305/AnsiballZ_copy.py'
Jan 31 06:09:05 compute-0 sudo[206656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:06 compute-0 python3.9[206658]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:06 compute-0 sudo[206656]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:06 compute-0 sudo[206808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuhsskdhoxblfixexvvgqrmvsndclszc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839746.2447224-1028-33903241440982/AnsiballZ_copy.py'
Jan 31 06:09:06 compute-0 sudo[206808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:06 compute-0 ceph-mon[75251]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:06 compute-0 python3.9[206810]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:06 compute-0 sudo[206808]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:07 compute-0 sudo[206960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvdlvktzcccgglqwhwppgfsweoxdifho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839746.8109338-1028-194703360101010/AnsiballZ_copy.py'
Jan 31 06:09:07 compute-0 sudo[206960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:07 compute-0 python3.9[206962]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:07 compute-0 sudo[206960]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:07 compute-0 sudo[207112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnelgohxhlcliffdbaxzozkdzgkqfzzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839747.4922829-1064-99376714726180/AnsiballZ_systemd.py'
Jan 31 06:09:07 compute-0 sudo[207112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:08 compute-0 python3.9[207114]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:09:08 compute-0 systemd[1]: Reloading.
Jan 31 06:09:08 compute-0 systemd-sysv-generator[207140]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:09:08 compute-0 systemd-rc-local-generator[207137]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:08 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 06:09:08 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 06:09:08 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 06:09:08 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 06:09:08 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 31 06:09:08 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 31 06:09:08 compute-0 sudo[207112]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:08 compute-0 ceph-mon[75251]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:09 compute-0 sudo[207305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmjejiidakqmycfihxhvjxiyxephmdsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839748.775033-1064-177512113734939/AnsiballZ_systemd.py'
Jan 31 06:09:09 compute-0 sudo[207305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:09 compute-0 python3.9[207307]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:09:09 compute-0 systemd[1]: Reloading.
Jan 31 06:09:09 compute-0 systemd-sysv-generator[207333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:09:09 compute-0 systemd-rc-local-generator[207325]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:09 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 06:09:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 06:09:09 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 06:09:09 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 06:09:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 06:09:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 06:09:09 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 06:09:09 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 06:09:09 compute-0 sudo[207305]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:09 compute-0 ceph-mon[75251]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:10 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 06:09:10 compute-0 sudo[207521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsduxboxhidyywlvmvathqimvcnstypi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839749.8617759-1064-6081646325256/AnsiballZ_systemd.py'
Jan 31 06:09:10 compute-0 sudo[207521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:10 compute-0 python3.9[207523]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:09:10 compute-0 systemd[1]: Reloading.
Jan 31 06:09:10 compute-0 systemd-rc-local-generator[207549]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:10 compute-0 systemd-sysv-generator[207552]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:09:10 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 06:09:10 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 06:09:10 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 06:09:10 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 06:09:10 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 06:09:10 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 06:09:10 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 06:09:10 compute-0 sudo[207521]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:10 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 06:09:10 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 06:09:11 compute-0 sudo[207740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfglehoasgaziojmqilqmmyhtbqfvptf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839750.9565542-1064-162270361658081/AnsiballZ_systemd.py'
Jan 31 06:09:11 compute-0 sudo[207740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:11 compute-0 python3.9[207742]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:09:11 compute-0 systemd[1]: Reloading.
Jan 31 06:09:11 compute-0 systemd-rc-local-generator[207767]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:11 compute-0 systemd-sysv-generator[207770]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:09:11 compute-0 setroubleshoot[207470]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 1cafb9b8-9bc1-45df-8834-ec4a4eb440c4
Jan 31 06:09:11 compute-0 setroubleshoot[207470]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 06:09:11 compute-0 setroubleshoot[207470]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 1cafb9b8-9bc1-45df-8834-ec4a4eb440c4
Jan 31 06:09:11 compute-0 setroubleshoot[207470]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 06:09:11 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 06:09:11 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 06:09:11 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 06:09:11 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 06:09:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 06:09:11 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 06:09:11 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 06:09:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 06:09:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 06:09:11 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 06:09:11 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 06:09:11 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 06:09:11 compute-0 sudo[207740]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:12 compute-0 sudo[207955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peayssnkmetdqddtmdzogjkkjnpqrsgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839752.091337-1064-113208949437237/AnsiballZ_systemd.py'
Jan 31 06:09:12 compute-0 sudo[207955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:12 compute-0 ceph-mon[75251]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:12 compute-0 python3.9[207957]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:09:12 compute-0 systemd[1]: Reloading.
Jan 31 06:09:12 compute-0 systemd-sysv-generator[207982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:09:12 compute-0 systemd-rc-local-generator[207978]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:13 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 06:09:13 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 06:09:13 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 06:09:13 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 06:09:13 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 06:09:13 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 06:09:13 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 31 06:09:13 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 31 06:09:13 compute-0 sudo[207955]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:13 compute-0 sudo[208166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epyaequhmmpgdgpnxosqsovskchfluer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839753.3213384-1101-170092201905788/AnsiballZ_file.py'
Jan 31 06:09:13 compute-0 sudo[208166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:13 compute-0 python3.9[208168]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:13 compute-0 sudo[208166]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:14 compute-0 ceph-mon[75251]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:14 compute-0 sudo[208318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwirjqybxfljigvjlawzfbseqgpexgqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839753.9293933-1109-140683967598071/AnsiballZ_find.py'
Jan 31 06:09:14 compute-0 sudo[208318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:14 compute-0 python3.9[208320]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 06:09:14 compute-0 sudo[208318]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:14 compute-0 sudo[208470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpfwzbnumzkfobnsiwqncsqledepfozv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839754.5985825-1117-184844661404287/AnsiballZ_command.py'
Jan 31 06:09:14 compute-0 sudo[208470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:15 compute-0 python3.9[208472]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:09:15 compute-0 sudo[208470]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:09:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:09:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:09:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:09:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:09:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:09:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:15 compute-0 python3.9[208626]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 06:09:16 compute-0 python3.9[208776]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:17 compute-0 ceph-mon[75251]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:17 compute-0 python3.9[208897]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839756.1429908-1136-246856622108095/.source.xml follow=False _original_basename=secret.xml.j2 checksum=586df18515c1a997dc9931be986ba8653ae45240 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:17 compute-0 sudo[209047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwotziqojwxcdetphcculxdpcicpjoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839757.3912702-1151-30880049703406/AnsiballZ_command.py'
Jan 31 06:09:17 compute-0 sudo[209047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:17 compute-0 python3.9[209049]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 797ee2fc-ca49-5eee-87c0-542bb035a7d7
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:09:17 compute-0 polkitd[43528]: Registered Authentication Agent for unix-process:209051:314232 (system bus name :1.2543 [pkttyagent --process 209051 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 06:09:17 compute-0 polkitd[43528]: Unregistered Authentication Agent for unix-process:209051:314232 (system bus name :1.2543, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 06:09:17 compute-0 polkitd[43528]: Registered Authentication Agent for unix-process:209050:314231 (system bus name :1.2544 [pkttyagent --process 209050 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 06:09:17 compute-0 polkitd[43528]: Unregistered Authentication Agent for unix-process:209050:314231 (system bus name :1.2544, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 06:09:18 compute-0 sudo[209047]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:18 compute-0 ceph-mon[75251]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:18 compute-0 python3.9[209211]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:19 compute-0 sudo[209361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsnekebbnmgxlahohsknnkgnjlfxzbhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839758.83779-1167-187082047864154/AnsiballZ_command.py'
Jan 31 06:09:19 compute-0 sudo[209361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:19 compute-0 sudo[209361]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:19 compute-0 sudo[209514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eauchaiiohnzuxvbufpgobzfqqblssuw ; FSID=797ee2fc-ca49-5eee-87c0-542bb035a7d7 KEY=AQCtmH1pAAAAABAAje//P7iwPlyQKUe9kDxc/g== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839759.5773013-1175-224317974646449/AnsiballZ_command.py'
Jan 31 06:09:19 compute-0 sudo[209514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:20 compute-0 polkitd[43528]: Registered Authentication Agent for unix-process:209517:314463 (system bus name :1.2547 [pkttyagent --process 209517 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 06:09:20 compute-0 polkitd[43528]: Unregistered Authentication Agent for unix-process:209517:314463 (system bus name :1.2547, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 06:09:20 compute-0 sudo[209514]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:20 compute-0 sudo[209672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zehxihgynsyehjtucetqlgkwvjmxoqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839760.558854-1183-73499351053693/AnsiballZ_copy.py'
Jan 31 06:09:20 compute-0 sudo[209672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:20 compute-0 ceph-mon[75251]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:21 compute-0 python3.9[209674]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:21 compute-0 sudo[209672]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:21 compute-0 sudo[209834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uffvnfrqplvaceftynyumqkhdpmumjct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839761.344544-1191-30357773149031/AnsiballZ_stat.py'
Jan 31 06:09:21 compute-0 sudo[209834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:21 compute-0 podman[209798]: 2026-01-31 06:09:21.699029466 +0000 UTC m=+0.130356048 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:09:21 compute-0 python3.9[209843]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:21 compute-0 sudo[209834]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:21 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 06:09:21 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 06:09:22 compute-0 sudo[209973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syuuhjfduwwkajbstkcdzpnjiaqlclik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839761.344544-1191-30357773149031/AnsiballZ_copy.py'
Jan 31 06:09:22 compute-0 sudo[209973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:22 compute-0 ceph-mon[75251]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:22 compute-0 python3.9[209975]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839761.344544-1191-30357773149031/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:22 compute-0 sudo[209973]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:23 compute-0 podman[210075]: 2026-01-31 06:09:23.131630752 +0000 UTC m=+0.048598121 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 06:09:23 compute-0 sudo[210145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zslnkrsmosguljsjvuwhcyblabqpooyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839762.8186543-1207-44943045236121/AnsiballZ_file.py'
Jan 31 06:09:23 compute-0 sudo[210145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:23 compute-0 python3.9[210147]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:23 compute-0 sudo[210145]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:23 compute-0 sudo[210320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okqlbpszqqikpbxcrtyfadaeemmdvofw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839763.6227188-1215-218870878350852/AnsiballZ_stat.py'
Jan 31 06:09:23 compute-0 sudo[210320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:23 compute-0 sudo[210277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:09:23 compute-0 sudo[210277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:23 compute-0 sudo[210277]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:24 compute-0 sudo[210325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:09:24 compute-0 sudo[210325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:24 compute-0 python3.9[210323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:24 compute-0 sudo[210320]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:24 compute-0 sudo[210454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmbexsgenmtkqihirekmukdflmredvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839763.6227188-1215-218870878350852/AnsiballZ_file.py'
Jan 31 06:09:24 compute-0 sudo[210454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:24 compute-0 sudo[210325]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:09:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:09:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:09:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:09:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:09:24 compute-0 python3.9[210459]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:24 compute-0 sudo[210454]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:09:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:09:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:09:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:09:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:09:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:09:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:09:24 compute-0 sudo[210536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:09:24 compute-0 sudo[210536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:24 compute-0 sudo[210536]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:25 compute-0 sudo[210562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:09:25 compute-0 sudo[210562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:25 compute-0 ceph-mon[75251]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:09:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:09:25 compute-0 sudo[210659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidltvvlohxcsotmylfzhzcvihjdoacn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839764.8627305-1227-141192493896822/AnsiballZ_stat.py'
Jan 31 06:09:25 compute-0 sudo[210659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:25 compute-0 python3.9[210661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:25 compute-0 sudo[210659]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:25 compute-0 podman[210675]: 2026-01-31 06:09:25.229481871 +0000 UTC m=+0.018588581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:09:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:25 compute-0 sudo[210764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gufylfgoimcheqvlqunmbskkgcaftymo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839764.8627305-1227-141192493896822/AnsiballZ_file.py'
Jan 31 06:09:25 compute-0 sudo[210764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:25 compute-0 podman[210675]: 2026-01-31 06:09:25.512975094 +0000 UTC m=+0.302081794 container create 8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 06:09:25 compute-0 systemd[1]: Started libpod-conmon-8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd.scope.
Jan 31 06:09:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:09:25 compute-0 python3.9[210766]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._b2xiwof recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:25 compute-0 sudo[210764]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:25 compute-0 podman[210675]: 2026-01-31 06:09:25.762423104 +0000 UTC m=+0.551529824 container init 8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:09:25 compute-0 podman[210675]: 2026-01-31 06:09:25.770826409 +0000 UTC m=+0.559933109 container start 8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:09:25 compute-0 fervent_feynman[210769]: 167 167
Jan 31 06:09:25 compute-0 systemd[1]: libpod-8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd.scope: Deactivated successfully.
Jan 31 06:09:25 compute-0 conmon[210769]: conmon 8a4cb1a605ea9ea1bdb9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd.scope/container/memory.events
Jan 31 06:09:25 compute-0 podman[210675]: 2026-01-31 06:09:25.800878109 +0000 UTC m=+0.589984849 container attach 8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 06:09:25 compute-0 podman[210675]: 2026-01-31 06:09:25.802064723 +0000 UTC m=+0.591171433 container died 8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fedbf4028da382083f5158dc8f5879c5f551a0c3152b3998e1b0415318cb69f3-merged.mount: Deactivated successfully.
Jan 31 06:09:26 compute-0 sudo[210935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbodyqsunzkycjoejxbakjcxzdnmxcwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839766.156713-1239-35010687676197/AnsiballZ_stat.py'
Jan 31 06:09:26 compute-0 sudo[210935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:26 compute-0 python3.9[210937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:26 compute-0 sudo[210935]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:26 compute-0 sudo[211013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptptofsnreecpyafirfanijzvkoazdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839766.156713-1239-35010687676197/AnsiballZ_file.py'
Jan 31 06:09:26 compute-0 sudo[211013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:09:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:09:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:09:26 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:09:26 compute-0 ceph-mon[75251]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:27 compute-0 python3.9[211015]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:27 compute-0 sudo[211013]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:27 compute-0 sudo[211165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tztjwqyoxttpagnialvyrkjuzdrjkexn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839767.2211194-1252-11251359810775/AnsiballZ_command.py'
Jan 31 06:09:27 compute-0 sudo[211165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:27 compute-0 python3.9[211167]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:09:27 compute-0 sudo[211165]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:28 compute-0 sudo[211318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvguagminvwgtcrwfntyrhfytgbjyvu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769839768.000944-1260-221418969235398/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 06:09:28 compute-0 sudo[211318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:28 compute-0 ceph-mon[75251]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:28 compute-0 podman[210675]: 2026-01-31 06:09:28.958739499 +0000 UTC m=+3.747846189 container remove 8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_feynman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:09:28 compute-0 python3[211320]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 06:09:29 compute-0 sudo[211318]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:29 compute-0 systemd[1]: libpod-conmon-8a4cb1a605ea9ea1bdb9749c9e46e8890c65713f33ef728dc858d7a69a6bc2bd.scope: Deactivated successfully.
Jan 31 06:09:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:29 compute-0 podman[211352]: 2026-01-31 06:09:29.109710193 +0000 UTC m=+0.022312475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:09:29 compute-0 podman[211352]: 2026-01-31 06:09:29.258724522 +0000 UTC m=+0.171326754 container create ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 06:09:29 compute-0 systemd[1]: Started libpod-conmon-ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf.scope.
Jan 31 06:09:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e024f9d7d0e25d73ea6b9b477bdc89bba59c946e9f77e6ff79865b4b546851/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e024f9d7d0e25d73ea6b9b477bdc89bba59c946e9f77e6ff79865b4b546851/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e024f9d7d0e25d73ea6b9b477bdc89bba59c946e9f77e6ff79865b4b546851/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e024f9d7d0e25d73ea6b9b477bdc89bba59c946e9f77e6ff79865b4b546851/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2e024f9d7d0e25d73ea6b9b477bdc89bba59c946e9f77e6ff79865b4b546851/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:29 compute-0 sudo[211496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjuqmqieurlvtmzyfgnqkirqusacbhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839769.184572-1268-123422554463025/AnsiballZ_stat.py'
Jan 31 06:09:29 compute-0 sudo[211496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:29 compute-0 podman[211352]: 2026-01-31 06:09:29.5584957 +0000 UTC m=+0.471097902 container init ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:09:29 compute-0 podman[211352]: 2026-01-31 06:09:29.570041323 +0000 UTC m=+0.482643525 container start ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:09:29 compute-0 podman[211352]: 2026-01-31 06:09:29.601035271 +0000 UTC m=+0.513637483 container attach ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:09:29 compute-0 python3.9[211498]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:29 compute-0 sudo[211496]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:29 compute-0 sudo[211588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynuwtpmgcfvaypetmtzvxwambzyviyjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839769.184572-1268-123422554463025/AnsiballZ_file.py'
Jan 31 06:09:29 compute-0 sudo[211588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:29 compute-0 festive_mendel[211443]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:09:29 compute-0 festive_mendel[211443]: --> All data devices are unavailable
Jan 31 06:09:30 compute-0 systemd[1]: libpod-ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf.scope: Deactivated successfully.
Jan 31 06:09:30 compute-0 podman[211352]: 2026-01-31 06:09:30.03522538 +0000 UTC m=+0.947827572 container died ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 06:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2e024f9d7d0e25d73ea6b9b477bdc89bba59c946e9f77e6ff79865b4b546851-merged.mount: Deactivated successfully.
Jan 31 06:09:30 compute-0 podman[211352]: 2026-01-31 06:09:30.129442116 +0000 UTC m=+1.042044308 container remove ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 06:09:30 compute-0 systemd[1]: libpod-conmon-ccf72162f5dd6127747ae0d5a1e0406706b249dd8ef9899e1e731f1b0bdbceaf.scope: Deactivated successfully.
Jan 31 06:09:30 compute-0 sudo[210562]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:30 compute-0 python3.9[211591]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:30 compute-0 sudo[211588]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:30 compute-0 sudo[211606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:09:30 compute-0 sudo[211606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:30 compute-0 sudo[211606]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:30 compute-0 sudo[211636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:09:30 compute-0 sudo[211636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:30 compute-0 ceph-mon[75251]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:30 compute-0 podman[211758]: 2026-01-31 06:09:30.473666838 +0000 UTC m=+0.022023477 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:09:30 compute-0 sudo[211831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kekpwhdbnvpxpfulaxczumtyggbelwuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839770.3254626-1280-113484243828194/AnsiballZ_stat.py'
Jan 31 06:09:30 compute-0 sudo[211831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:30 compute-0 podman[211758]: 2026-01-31 06:09:30.598052268 +0000 UTC m=+0.146408847 container create 9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mirzakhani, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:09:30 compute-0 systemd[1]: Started libpod-conmon-9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921.scope.
Jan 31 06:09:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:09:30 compute-0 podman[211758]: 2026-01-31 06:09:30.750963077 +0000 UTC m=+0.299319676 container init 9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 06:09:30 compute-0 podman[211758]: 2026-01-31 06:09:30.755772792 +0000 UTC m=+0.304129371 container start 9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mirzakhani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 31 06:09:30 compute-0 festive_mirzakhani[211836]: 167 167
Jan 31 06:09:30 compute-0 systemd[1]: libpod-9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921.scope: Deactivated successfully.
Jan 31 06:09:30 compute-0 python3.9[211833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:30 compute-0 sudo[211831]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:30 compute-0 podman[211758]: 2026-01-31 06:09:30.903772993 +0000 UTC m=+0.452129592 container attach 9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:09:30 compute-0 podman[211758]: 2026-01-31 06:09:30.904087522 +0000 UTC m=+0.452444101 container died 9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 06:09:31 compute-0 sudo[211976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbrsiomkrdtncqzefipxaklbfyqgkvys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839770.3254626-1280-113484243828194/AnsiballZ_copy.py'
Jan 31 06:09:31 compute-0 sudo[211976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-97dbc2866a85e9a13290da7fa5e56968c5c3222cf48c49a87db7f4bc752518be-merged.mount: Deactivated successfully.
Jan 31 06:09:31 compute-0 python3.9[211978]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839770.3254626-1280-113484243828194/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:31 compute-0 sudo[211976]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:31 compute-0 podman[211758]: 2026-01-31 06:09:31.609393887 +0000 UTC m=+1.157750506 container remove 9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:09:31 compute-0 systemd[1]: libpod-conmon-9b1b5c659562089cf9a39e3f35a7386c33f8e2ceddbd24017c54c4ca86e6b921.scope: Deactivated successfully.
Jan 31 06:09:31 compute-0 podman[212066]: 2026-01-31 06:09:31.742506512 +0000 UTC m=+0.020485905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:09:31 compute-0 sudo[212148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nicytwrygqohvaxwmkgjunhmhmvmfzmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839771.6087675-1295-272445379096645/AnsiballZ_stat.py'
Jan 31 06:09:31 compute-0 sudo[212148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:31 compute-0 podman[212066]: 2026-01-31 06:09:31.853056655 +0000 UTC m=+0.131036028 container create b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 31 06:09:32 compute-0 python3.9[212150]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:32 compute-0 sudo[212148]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:32 compute-0 sudo[212228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffwazftveqzcojwiwjubpgxcfganhgnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839771.6087675-1295-272445379096645/AnsiballZ_file.py'
Jan 31 06:09:32 compute-0 sudo[212228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:32 compute-0 ceph-mon[75251]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:32 compute-0 systemd[1]: Started libpod-conmon-b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c.scope.
Jan 31 06:09:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c7f146bf6164ca7dac0f5a425048858c759084a55d4844e498f9fa8a0131d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c7f146bf6164ca7dac0f5a425048858c759084a55d4844e498f9fa8a0131d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c7f146bf6164ca7dac0f5a425048858c759084a55d4844e498f9fa8a0131d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33c7f146bf6164ca7dac0f5a425048858c759084a55d4844e498f9fa8a0131d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:32 compute-0 python3.9[212230]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:32 compute-0 podman[212066]: 2026-01-31 06:09:32.528770251 +0000 UTC m=+0.806749634 container init b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mahavira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:09:32 compute-0 podman[212066]: 2026-01-31 06:09:32.534020178 +0000 UTC m=+0.811999551 container start b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mahavira, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:09:32 compute-0 sudo[212228]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:32 compute-0 podman[212066]: 2026-01-31 06:09:32.696060182 +0000 UTC m=+0.974039565 container attach b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mahavira, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]: {
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:     "0": [
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:         {
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "devices": [
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "/dev/loop3"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             ],
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_name": "ceph_lv0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_size": "21470642176",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "name": "ceph_lv0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "tags": {
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cluster_name": "ceph",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.crush_device_class": "",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.encrypted": "0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.objectstore": "bluestore",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osd_id": "0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.type": "block",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.vdo": "0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.with_tpm": "0"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             },
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "type": "block",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "vg_name": "ceph_vg0"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:         }
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:     ],
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:     "1": [
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:         {
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "devices": [
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "/dev/loop4"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             ],
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_name": "ceph_lv1",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_size": "21470642176",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "name": "ceph_lv1",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "tags": {
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cluster_name": "ceph",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.crush_device_class": "",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.encrypted": "0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.objectstore": "bluestore",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osd_id": "1",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.type": "block",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.vdo": "0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.with_tpm": "0"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             },
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "type": "block",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "vg_name": "ceph_vg1"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:         }
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:     ],
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:     "2": [
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:         {
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "devices": [
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "/dev/loop5"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             ],
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_name": "ceph_lv2",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_size": "21470642176",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "name": "ceph_lv2",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "tags": {
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.cluster_name": "ceph",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.crush_device_class": "",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.encrypted": "0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.objectstore": "bluestore",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osd_id": "2",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.type": "block",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.vdo": "0",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:                 "ceph.with_tpm": "0"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             },
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "type": "block",
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:             "vg_name": "ceph_vg2"
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:         }
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]:     ]
Jan 31 06:09:32 compute-0 suspicious_mahavira[212233]: }
Jan 31 06:09:32 compute-0 systemd[1]: libpod-b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c.scope: Deactivated successfully.
Jan 31 06:09:32 compute-0 podman[212066]: 2026-01-31 06:09:32.812574352 +0000 UTC m=+1.090553725 container died b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mahavira, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:09:32 compute-0 sudo[212402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjaprqmabfynmxshgovpadwsqnxfuen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839772.694643-1307-120591000781134/AnsiballZ_stat.py'
Jan 31 06:09:32 compute-0 sudo[212402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:33 compute-0 python3.9[212404]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:33 compute-0 sudo[212402]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-33c7f146bf6164ca7dac0f5a425048858c759084a55d4844e498f9fa8a0131d5-merged.mount: Deactivated successfully.
Jan 31 06:09:33 compute-0 sudo[212481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snyvguqtoxxqhswazdjkarmmlwmbjckl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839772.694643-1307-120591000781134/AnsiballZ_file.py'
Jan 31 06:09:33 compute-0 sudo[212481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:33 compute-0 python3.9[212483]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:33 compute-0 sudo[212481]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:33 compute-0 podman[212066]: 2026-01-31 06:09:33.697443682 +0000 UTC m=+1.975423195 container remove b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mahavira, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:09:33 compute-0 sudo[211636]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:33 compute-0 systemd[1]: libpod-conmon-b29537dbf4a0f490060561e3ba2266a8933830face9606b83360316807d19c3c.scope: Deactivated successfully.
Jan 31 06:09:33 compute-0 sudo[212512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:09:33 compute-0 sudo[212512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:33 compute-0 sudo[212512]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:33 compute-0 sudo[212562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:09:33 compute-0 sudo[212562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:34 compute-0 sudo[212707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wavmdamwasklrkugcljzlgcbomazjylr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839773.7659197-1319-109924609090758/AnsiballZ_stat.py'
Jan 31 06:09:34 compute-0 sudo[212707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:34 compute-0 podman[212669]: 2026-01-31 06:09:34.047678752 +0000 UTC m=+0.022944133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:09:34 compute-0 python3.9[212711]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:34 compute-0 sudo[212707]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:34 compute-0 podman[212669]: 2026-01-31 06:09:34.314438057 +0000 UTC m=+0.289703428 container create 09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:09:34 compute-0 systemd[1]: Started libpod-conmon-09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a.scope.
Jan 31 06:09:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:09:34 compute-0 podman[212669]: 2026-01-31 06:09:34.54544639 +0000 UTC m=+0.520711801 container init 09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:09:34 compute-0 podman[212669]: 2026-01-31 06:09:34.55187971 +0000 UTC m=+0.527145081 container start 09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:09:34 compute-0 dazzling_noyce[212763]: 167 167
Jan 31 06:09:34 compute-0 systemd[1]: libpod-09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a.scope: Deactivated successfully.
Jan 31 06:09:34 compute-0 podman[212669]: 2026-01-31 06:09:34.610585123 +0000 UTC m=+0.585850484 container attach 09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:09:34 compute-0 podman[212669]: 2026-01-31 06:09:34.6111893 +0000 UTC m=+0.586454671 container died 09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noyce, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:09:34 compute-0 sudo[212852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmmekkwrkhfkwpxepxfbxbvtbrygxisu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839773.7659197-1319-109924609090758/AnsiballZ_copy.py'
Jan 31 06:09:34 compute-0 sudo[212852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:34 compute-0 ceph-mon[75251]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:34 compute-0 python3.9[212854]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769839773.7659197-1319-109924609090758/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed71ce9c6db723de55fbbb92c3e529b3d160332d08a3c84fd5592147f4a1a63d-merged.mount: Deactivated successfully.
Jan 31 06:09:34 compute-0 sudo[212852]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:35 compute-0 sudo[213007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hokgmziptakgurrjlxuadpkydqqjxsvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839775.0363834-1334-167168479306951/AnsiballZ_file.py'
Jan 31 06:09:35 compute-0 sudo[213007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:35 compute-0 python3.9[213009]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:35 compute-0 sudo[213007]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:35 compute-0 sudo[213159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkmcxlzuzvcvqbmjjxeiepatwusyqjbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839775.6222606-1342-40152022482674/AnsiballZ_command.py'
Jan 31 06:09:35 compute-0 sudo[213159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:36 compute-0 python3.9[213161]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:09:36 compute-0 podman[212669]: 2026-01-31 06:09:36.048849075 +0000 UTC m=+2.024114446 container remove 09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:09:36 compute-0 systemd[1]: libpod-conmon-09723c0016ea6a42bffa60921e4c2b97532d402d97c75898aa6edd4dc473a41a.scope: Deactivated successfully.
Jan 31 06:09:36 compute-0 sudo[213159]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:36 compute-0 ceph-mon[75251]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:36 compute-0 podman[213196]: 2026-01-31 06:09:36.189919312 +0000 UTC m=+0.020509965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:09:36 compute-0 podman[213196]: 2026-01-31 06:09:36.531237842 +0000 UTC m=+0.361828535 container create c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_babbage, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 06:09:36 compute-0 sudo[213335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwfanljpqievlkhfdeegheseztnihjat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839776.213424-1350-223808076302663/AnsiballZ_blockinfile.py'
Jan 31 06:09:36 compute-0 sudo[213335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:36 compute-0 python3.9[213337]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:36 compute-0 systemd[1]: Started libpod-conmon-c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e.scope.
Jan 31 06:09:36 compute-0 sudo[213335]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af80cb97fe9021c424be8e5343e0d7f802cb757b5e001516a48e5faa8c3f476d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af80cb97fe9021c424be8e5343e0d7f802cb757b5e001516a48e5faa8c3f476d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af80cb97fe9021c424be8e5343e0d7f802cb757b5e001516a48e5faa8c3f476d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af80cb97fe9021c424be8e5343e0d7f802cb757b5e001516a48e5faa8c3f476d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:37 compute-0 sudo[213493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udricqukpvmlwrvwewjvzijogmywuldx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839777.1168244-1359-229744469349679/AnsiballZ_command.py'
Jan 31 06:09:37 compute-0 sudo[213493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:37 compute-0 podman[213196]: 2026-01-31 06:09:37.376828983 +0000 UTC m=+1.207419656 container init c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:09:37 compute-0 podman[213196]: 2026-01-31 06:09:37.385588968 +0000 UTC m=+1.216179631 container start c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:09:37 compute-0 podman[213196]: 2026-01-31 06:09:37.470760621 +0000 UTC m=+1.301351274 container attach c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_babbage, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:09:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:37 compute-0 python3.9[213495]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:09:37 compute-0 sudo[213493]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:37 compute-0 lvm[213672]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:09:37 compute-0 lvm[213673]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:09:37 compute-0 lvm[213673]: VG ceph_vg1 finished
Jan 31 06:09:37 compute-0 lvm[213672]: VG ceph_vg0 finished
Jan 31 06:09:37 compute-0 lvm[213698]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:09:37 compute-0 lvm[213698]: VG ceph_vg2 finished
Jan 31 06:09:38 compute-0 sudo[213725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycrwljvdepuyytjyaznfnqmmyvdljoqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839777.8424072-1367-209846550450013/AnsiballZ_stat.py'
Jan 31 06:09:38 compute-0 sudo[213725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:38 compute-0 funny_babbage[213341]: {}
Jan 31 06:09:38 compute-0 systemd[1]: libpod-c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e.scope: Deactivated successfully.
Jan 31 06:09:38 compute-0 systemd[1]: libpod-c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e.scope: Consumed 1.005s CPU time.
Jan 31 06:09:38 compute-0 podman[213196]: 2026-01-31 06:09:38.132454536 +0000 UTC m=+1.963045219 container died c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_babbage, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 06:09:38 compute-0 python3.9[213728]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:09:38 compute-0 sudo[213725]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:38 compute-0 ceph-mon[75251]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:38 compute-0 sudo[213893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcotiulfwvjdrazuvfoocjyuomibsldz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839778.4665258-1375-19207229483265/AnsiballZ_command.py'
Jan 31 06:09:38 compute-0 sudo[213893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-af80cb97fe9021c424be8e5343e0d7f802cb757b5e001516a48e5faa8c3f476d-merged.mount: Deactivated successfully.
Jan 31 06:09:38 compute-0 python3.9[213895]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:09:38 compute-0 sudo[213893]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:39 compute-0 podman[213196]: 2026-01-31 06:09:39.25908245 +0000 UTC m=+3.089673123 container remove c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_babbage, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 31 06:09:39 compute-0 sudo[212562]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:09:39 compute-0 systemd[1]: libpod-conmon-c006b92567a44100a4793d3cc45b4273dea30bb03a10b9b4cb172181bc307b0e.scope: Deactivated successfully.
Jan 31 06:09:39 compute-0 sudo[214048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpcaohgkayagdeiozffxifeebrkbjpuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839779.1393387-1383-97828154827128/AnsiballZ_file.py'
Jan 31 06:09:39 compute-0 sudo[214048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:39 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:09:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:09:39 compute-0 python3.9[214050]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:39 compute-0 sudo[214048]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:39 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:09:39 compute-0 sudo[214127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:09:40 compute-0 sudo[214127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:09:40 compute-0 sudo[214127]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:40 compute-0 sudo[214225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zefseukyhhmxneebvlpkteoquvwddlld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839779.899778-1391-152365738558966/AnsiballZ_stat.py'
Jan 31 06:09:40 compute-0 sudo[214225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:40 compute-0 python3.9[214227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:40 compute-0 sudo[214225]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:40 compute-0 sudo[214348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rohgaedjjbactvfssjzlvetbtwfqdier ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839779.899778-1391-152365738558966/AnsiballZ_copy.py'
Jan 31 06:09:40 compute-0 sudo[214348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:40 compute-0 python3.9[214350]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839779.899778-1391-152365738558966/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:40 compute-0 sudo[214348]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:40 compute-0 ceph-mon[75251]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:09:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:09:41 compute-0 sudo[214500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmadwobvksjezxxpcweatofldjbkchnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839781.0188134-1406-52090827872312/AnsiballZ_stat.py'
Jan 31 06:09:41 compute-0 sudo[214500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:41 compute-0 python3.9[214502]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:41 compute-0 sudo[214500]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:41 compute-0 sudo[214623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veuybhrvglrbmunlhnwrgrqknbfvyqyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839781.0188134-1406-52090827872312/AnsiballZ_copy.py'
Jan 31 06:09:41 compute-0 sudo[214623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:41 compute-0 python3.9[214625]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839781.0188134-1406-52090827872312/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:41 compute-0 sudo[214623]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:42 compute-0 ceph-mon[75251]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:42 compute-0 sudo[214775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqkhmepzzrpqzjbskbpyqayjlqeldfqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839782.146773-1421-87893427482108/AnsiballZ_stat.py'
Jan 31 06:09:42 compute-0 sudo[214775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:42 compute-0 python3.9[214777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:09:42 compute-0 sudo[214775]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:42 compute-0 sudo[214898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlxuiadzszqxeenlebxymigvccifxodl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839782.146773-1421-87893427482108/AnsiballZ_copy.py'
Jan 31 06:09:42 compute-0 sudo[214898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:43 compute-0 python3.9[214900]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839782.146773-1421-87893427482108/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:09:43 compute-0 sudo[214898]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:43 compute-0 sudo[215050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtbqxxssspbqahtmuubvbuenuixccwsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839783.2005005-1436-246375971752775/AnsiballZ_systemd.py'
Jan 31 06:09:43 compute-0 sudo[215050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:43 compute-0 python3.9[215052]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:09:43 compute-0 systemd[1]: Reloading.
Jan 31 06:09:43 compute-0 systemd-rc-local-generator[215077]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:43 compute-0 systemd-sysv-generator[215083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:09:44 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 06:09:44 compute-0 sudo[215050]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:09:44
Jan 31 06:09:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:09:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:09:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'images', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Jan 31 06:09:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:09:44 compute-0 sudo[215241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndnnecpnuvwlfdkirptimzqvaddsedxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839784.2428703-1444-76127549775763/AnsiballZ_systemd.py'
Jan 31 06:09:44 compute-0 sudo[215241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:44 compute-0 ceph-mon[75251]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:44 compute-0 python3.9[215243]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 06:09:44 compute-0 systemd[1]: Reloading.
Jan 31 06:09:44 compute-0 systemd-rc-local-generator[215268]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:44 compute-0 systemd-sysv-generator[215272]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:09:45 compute-0 systemd[1]: Reloading.
Jan 31 06:09:45 compute-0 systemd-sysv-generator[215310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:09:45 compute-0 systemd-rc-local-generator[215306]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:09:45 compute-0 sudo[215241]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:45 compute-0 sshd-session[156211]: Connection closed by 192.168.122.30 port 34644
Jan 31 06:09:45 compute-0 sshd-session[156183]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:09:45 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 06:09:45 compute-0 systemd[1]: session-48.scope: Consumed 2min 48.823s CPU time.
Jan 31 06:09:45 compute-0 systemd-logind[797]: Session 48 logged out. Waiting for processes to exit.
Jan 31 06:09:45 compute-0 systemd-logind[797]: Removed session 48.
Jan 31 06:09:46 compute-0 ceph-mon[75251]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:48 compute-0 ceph-mon[75251]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:09:50.203 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:09:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:09:50.204 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:09:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:09:50.204 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:09:50 compute-0 ceph-mon[75251]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:52 compute-0 podman[215339]: 2026-01-31 06:09:52.197328931 +0000 UTC m=+0.125711028 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 31 06:09:52 compute-0 ceph-mon[75251]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:53 compute-0 ceph-mon[75251]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:53 compute-0 sshd-session[215365]: Accepted publickey for zuul from 192.168.122.30 port 53670 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:09:53 compute-0 systemd-logind[797]: New session 49 of user zuul.
Jan 31 06:09:53 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 31 06:09:53 compute-0 sshd-session[215365]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:09:54 compute-0 podman[215367]: 2026-01-31 06:09:54.021191563 +0000 UTC m=+0.046549393 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:09:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:54 compute-0 python3.9[215537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:09:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:09:55 compute-0 ceph-mon[75251]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:56 compute-0 python3.9[215691]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:09:56 compute-0 network[215708]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:09:56 compute-0 network[215709]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:09:56 compute-0 network[215710]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:09:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:58 compute-0 ceph-mon[75251]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:59 compute-0 sudo[215980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjqfsxjxttetqymmpooujcycozujivwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839798.845487-42-182366111760055/AnsiballZ_setup.py'
Jan 31 06:09:59 compute-0 sudo[215980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:09:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:09:59 compute-0 python3.9[215982]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:09:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:09:59 compute-0 sudo[215980]: pam_unix(sudo:session): session closed for user root
Jan 31 06:09:59 compute-0 sudo[216064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzodvwhomejhnnffqcgannbuplrfimvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839798.845487-42-182366111760055/AnsiballZ_dnf.py'
Jan 31 06:09:59 compute-0 sudo[216064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:00 compute-0 python3.9[216066]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:10:00 compute-0 ceph-mon[75251]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:02 compute-0 ceph-mon[75251]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:04 compute-0 ceph-mon[75251]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:06 compute-0 ceph-mon[75251]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:07 compute-0 sudo[216064]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:08 compute-0 ceph-mon[75251]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:08 compute-0 sudo[216217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uifkoqpdmyiyxolkroogwwclydlarmfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839808.0068016-54-20573844493855/AnsiballZ_stat.py'
Jan 31 06:10:08 compute-0 sudo[216217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:08 compute-0 python3.9[216219]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:10:08 compute-0 sudo[216217]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:09 compute-0 sudo[216369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqhhmlaipkpirblxfezraufffkgapxoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839808.8583834-64-126963006920164/AnsiballZ_command.py'
Jan 31 06:10:09 compute-0 sudo[216369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:09 compute-0 python3.9[216371]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:09 compute-0 sudo[216369]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:09 compute-0 sudo[216522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjeftzlwiijombpoheekecfwantwfugd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839809.7197073-74-208909057196410/AnsiballZ_stat.py'
Jan 31 06:10:09 compute-0 sudo[216522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:10 compute-0 python3.9[216524]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:10:10 compute-0 sudo[216522]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:10 compute-0 sudo[216674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqvzzcopvjnschzsmhkjnuikkrmldbeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839810.4846058-82-236096061492690/AnsiballZ_command.py'
Jan 31 06:10:10 compute-0 sudo[216674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:10 compute-0 ceph-mon[75251]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:10 compute-0 python3.9[216676]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:10 compute-0 sudo[216674]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:11 compute-0 sudo[216827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htfqswszghrkmnnsdjxretxhvshxurbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839811.1126258-90-151847736957309/AnsiballZ_stat.py'
Jan 31 06:10:11 compute-0 sudo[216827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:11 compute-0 python3.9[216829]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:10:11 compute-0 sudo[216827]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:11 compute-0 ceph-mon[75251]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:12 compute-0 sudo[216950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxlipmaqftibxwrkuluddfmptpgkldzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839811.1126258-90-151847736957309/AnsiballZ_copy.py'
Jan 31 06:10:12 compute-0 sudo[216950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:12 compute-0 python3.9[216952]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839811.1126258-90-151847736957309/.source.iscsi _original_basename=.nxzcf5y_ follow=False checksum=22c8aed8acac769e56316c20adf950140e9d0c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:12 compute-0 sudo[216950]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:12 compute-0 sudo[217102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znquayyploalpdxntbrilknwshlliauv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839812.4233468-105-97320687879989/AnsiballZ_file.py'
Jan 31 06:10:12 compute-0 sudo[217102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:13 compute-0 python3.9[217104]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:13 compute-0 sudo[217102]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:13 compute-0 sudo[217254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfztuuqhpaqszvkunjefwrwnmbpnnatk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839813.2197392-113-40109522835481/AnsiballZ_lineinfile.py'
Jan 31 06:10:13 compute-0 sudo[217254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:13 compute-0 python3.9[217256]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:13 compute-0 sudo[217254]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:14 compute-0 sudo[217406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxtpcrozvsnsxwzeaubwhkslowethwfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839814.0267649-122-266848403055616/AnsiballZ_systemd_service.py'
Jan 31 06:10:14 compute-0 sudo[217406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:14 compute-0 ceph-mon[75251]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:14 compute-0 python3.9[217408]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:10:14 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 06:10:14 compute-0 sudo[217406]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:15 compute-0 sudo[217562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpsgfaymqxmqyyxbkowhdvcuudisdvmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839815.0041423-130-84262375218429/AnsiballZ_systemd_service.py'
Jan 31 06:10:15 compute-0 sudo[217562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:10:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:10:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:10:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:10:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:10:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:10:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:15 compute-0 python3.9[217564]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:10:15 compute-0 systemd[1]: Reloading.
Jan 31 06:10:15 compute-0 systemd-rc-local-generator[217590]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:10:15 compute-0 systemd-sysv-generator[217595]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:10:15 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 06:10:15 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 06:10:15 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 06:10:15 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 06:10:15 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 06:10:15 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 06:10:15 compute-0 sudo[217562]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:16 compute-0 ceph-mon[75251]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:16 compute-0 python3.9[217763]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:10:16 compute-0 network[217780]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:10:16 compute-0 network[217781]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:10:16 compute-0 network[217782]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:10:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:18 compute-0 ceph-mon[75251]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:19 compute-0 sudo[218052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zprrgmrluoocfcqrioykrxzkbpngzixh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839819.1953986-153-26398940239070/AnsiballZ_dnf.py'
Jan 31 06:10:19 compute-0 sudo[218052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:19 compute-0 python3.9[218054]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:10:20 compute-0 ceph-mon[75251]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:10:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:10:21 compute-0 systemd[1]: Reloading.
Jan 31 06:10:21 compute-0 systemd-rc-local-generator[218095]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:10:21 compute-0 systemd-sysv-generator[218101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:10:21 compute-0 ceph-mon[75251]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:10:23 compute-0 sudo[218052]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:23 compute-0 podman[218218]: 2026-01-31 06:10:23.136272336 +0000 UTC m=+0.059087746 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:10:23 compute-0 sudo[218395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weihcxakridcgeovwggvnhbiosiikxgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839823.2580664-162-253314104830333/AnsiballZ_file.py'
Jan 31 06:10:23 compute-0 sudo[218395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:23 compute-0 python3.9[218397]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 06:10:23 compute-0 sudo[218395]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:10:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:10:24 compute-0 systemd[1]: run-rbe213d5025f7487d9f2b8aa47365acf3.service: Deactivated successfully.
Jan 31 06:10:24 compute-0 podman[218474]: 2026-01-31 06:10:24.143375242 +0000 UTC m=+0.062048367 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 06:10:24 compute-0 sudo[218570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyiodstffgpanzlcdouxfieiqulvpicu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839823.847418-170-32524656283899/AnsiballZ_modprobe.py'
Jan 31 06:10:24 compute-0 sudo[218570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:24 compute-0 python3.9[218572]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 06:10:24 compute-0 sudo[218570]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:24 compute-0 ceph-mon[75251]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:24 compute-0 sudo[218726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymqeaskhxackikgpdmpwegpyalujaujz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839824.6206667-178-192128162776413/AnsiballZ_stat.py'
Jan 31 06:10:24 compute-0 sudo[218726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:25 compute-0 python3.9[218728]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:10:25 compute-0 sudo[218726]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:25 compute-0 sudo[218849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfzjdqerqiiunzqkefxnpheuirytovai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839824.6206667-178-192128162776413/AnsiballZ_copy.py'
Jan 31 06:10:25 compute-0 sudo[218849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:25 compute-0 python3.9[218851]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839824.6206667-178-192128162776413/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:25 compute-0 sudo[218849]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:26 compute-0 sudo[219001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnlookiajumznhmtzpwtpxirbxqnnxwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839825.8115494-194-107167158332012/AnsiballZ_lineinfile.py'
Jan 31 06:10:26 compute-0 sudo[219001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:26 compute-0 python3.9[219003]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:26 compute-0 sudo[219001]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:26 compute-0 ceph-mon[75251]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:26 compute-0 sudo[219153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmmogskfpihroqrnboguqonjfhmlmtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839826.3738194-202-131104287218759/AnsiballZ_systemd.py'
Jan 31 06:10:26 compute-0 sudo[219153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:27 compute-0 python3.9[219155]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:10:27 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 06:10:27 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 06:10:27 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 06:10:27 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 06:10:27 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 06:10:27 compute-0 sudo[219153]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:27 compute-0 sudo[219309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmesgkwabbbawuwckotewtdsvgzpbjwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839827.6006553-210-49911549202493/AnsiballZ_command.py'
Jan 31 06:10:27 compute-0 sudo[219309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:28 compute-0 python3.9[219311]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:28 compute-0 sudo[219309]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:28 compute-0 sudo[219462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uejptmdctfvxohcaimhkmmugzpigbmoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839828.361944-220-185870897459817/AnsiballZ_stat.py'
Jan 31 06:10:28 compute-0 sudo[219462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:28 compute-0 python3.9[219464]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:10:28 compute-0 sudo[219462]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:28 compute-0 ceph-mon[75251]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:29 compute-0 sudo[219614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bshrdaiszldjzzbeoukrkxuuqaouatsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839828.9884765-229-91991244890184/AnsiballZ_stat.py'
Jan 31 06:10:29 compute-0 sudo[219614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:29 compute-0 python3.9[219616]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:10:29 compute-0 sudo[219614]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:29 compute-0 sudo[219737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqiqwfasrtjyucivcwofkhgfobjquxwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839828.9884765-229-91991244890184/AnsiballZ_copy.py'
Jan 31 06:10:29 compute-0 sudo[219737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:29 compute-0 python3.9[219739]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839828.9884765-229-91991244890184/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:29 compute-0 sudo[219737]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:30 compute-0 ceph-mon[75251]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:30 compute-0 sudo[219889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pitvtprviibqlfpofgypuywdheaqndpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839830.0845795-244-143456359103919/AnsiballZ_command.py'
Jan 31 06:10:30 compute-0 sudo[219889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:30 compute-0 python3.9[219891]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:30 compute-0 sudo[219889]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:30 compute-0 sudo[220042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfszghadrsxzusrzaqbvrddmbuhzsxlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839830.6995537-252-32295235565638/AnsiballZ_lineinfile.py'
Jan 31 06:10:30 compute-0 sudo[220042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:31 compute-0 python3.9[220044]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:31 compute-0 sudo[220042]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:31 compute-0 sudo[220194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyemiwszllofqxxufofpladotixlspox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839831.367422-260-66785353562861/AnsiballZ_replace.py'
Jan 31 06:10:31 compute-0 sudo[220194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:32 compute-0 python3.9[220196]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:32 compute-0 sudo[220194]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:32 compute-0 sudo[220346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvevbuudymkdbaouqlkintupykwghnte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839832.2073498-268-265454939141056/AnsiballZ_replace.py'
Jan 31 06:10:32 compute-0 sudo[220346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:32 compute-0 python3.9[220348]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:32 compute-0 sudo[220346]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:32 compute-0 ceph-mon[75251]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:33 compute-0 sudo[220498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdnklujmwnqbcmgbfxftgoiayflfuxac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839832.7897906-277-97302930028046/AnsiballZ_lineinfile.py'
Jan 31 06:10:33 compute-0 sudo[220498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:33 compute-0 python3.9[220500]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:33 compute-0 sudo[220498]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:33 compute-0 sudo[220650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rspxdxjfpwynenstbrwybgeowmlnbmxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839833.4291446-277-157618384852661/AnsiballZ_lineinfile.py'
Jan 31 06:10:33 compute-0 sudo[220650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:33 compute-0 python3.9[220652]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:33 compute-0 sudo[220650]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:34 compute-0 sudo[220802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jitijqdfhixjkgcsewdjuxfqmuhyusnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839833.9811096-277-21425682514581/AnsiballZ_lineinfile.py'
Jan 31 06:10:34 compute-0 sudo[220802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:34 compute-0 python3.9[220804]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:34 compute-0 sudo[220802]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:34 compute-0 ceph-mon[75251]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:34 compute-0 sudo[220954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knjvrqnfrepzqyofmvdnmhmslqeukley ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839834.5684757-277-150146943176516/AnsiballZ_lineinfile.py'
Jan 31 06:10:34 compute-0 sudo[220954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:35 compute-0 python3.9[220956]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:35 compute-0 sudo[220954]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:35 compute-0 sudo[221106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdinofznmlonbjpngcxncopghcgsbuug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839835.2510471-306-219012040751356/AnsiballZ_stat.py'
Jan 31 06:10:35 compute-0 sudo[221106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:35 compute-0 python3.9[221108]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:10:35 compute-0 sudo[221106]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:35 compute-0 ceph-mon[75251]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:36 compute-0 sudo[221260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gabbjiaeriencvrcyzojgtlzkhklxepr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839835.9124427-314-64247505123206/AnsiballZ_command.py'
Jan 31 06:10:36 compute-0 sudo[221260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:36 compute-0 python3.9[221262]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:36 compute-0 sudo[221260]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:36 compute-0 sudo[221413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsnkqrxkgdbnyavlaxjkhqxkixoperh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839836.5507448-323-214321478847024/AnsiballZ_systemd_service.py'
Jan 31 06:10:36 compute-0 sudo[221413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:37 compute-0 python3.9[221415]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:10:37 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 31 06:10:37 compute-0 sudo[221413]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:37 compute-0 sudo[221569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uewdwumstgkajpulkzkbtspzufceoixi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839837.445287-331-69473100310314/AnsiballZ_systemd_service.py'
Jan 31 06:10:37 compute-0 sudo[221569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:38 compute-0 python3.9[221571]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:10:38 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 06:10:38 compute-0 udevadm[221576]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 06:10:38 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 06:10:38 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 06:10:38 compute-0 ceph-mon[75251]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:38 compute-0 multipathd[221580]: --------start up--------
Jan 31 06:10:38 compute-0 multipathd[221580]: read /etc/multipath.conf
Jan 31 06:10:38 compute-0 multipathd[221580]: path checkers start up
Jan 31 06:10:38 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 06:10:38 compute-0 sudo[221569]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:39 compute-0 sudo[221737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjcumehnrlmqukrvldunqmqgzbbzhbea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839838.9305716-343-264583659121055/AnsiballZ_file.py'
Jan 31 06:10:39 compute-0 sudo[221737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:39 compute-0 python3.9[221739]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 06:10:39 compute-0 sudo[221737]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:40 compute-0 sudo[221863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:10:40 compute-0 sudo[221863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:10:40 compute-0 sudo[221863]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:40 compute-0 sudo[221914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpaqurxromceoszbnphacaggmvrlddfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839839.7941425-351-100879266630992/AnsiballZ_modprobe.py'
Jan 31 06:10:40 compute-0 sudo[221914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:40 compute-0 sudo[221915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:10:40 compute-0 sudo[221915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:10:40 compute-0 python3.9[221923]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 06:10:40 compute-0 kernel: Key type psk registered
Jan 31 06:10:40 compute-0 sudo[221914]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:40 compute-0 sudo[221915]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:10:40 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:10:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:10:40 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:10:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:10:40 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:10:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:10:40 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:10:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:10:40 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:10:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:10:40 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:10:40 compute-0 sudo[222063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:10:40 compute-0 sudo[222063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:10:40 compute-0 sudo[222063]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:40 compute-0 sudo[222106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:10:40 compute-0 sudo[222106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:10:40 compute-0 sudo[222181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljpthsznaismyksaelgsuogojjiguqsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839840.582636-359-81677654913069/AnsiballZ_stat.py'
Jan 31 06:10:40 compute-0 sudo[222181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:40 compute-0 ceph-mon[75251]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:10:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:10:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:10:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:10:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:10:40 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:10:41 compute-0 python3.9[222183]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:10:41 compute-0 sudo[222181]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:41 compute-0 podman[222196]: 2026-01-31 06:10:40.973462652 +0000 UTC m=+0.021223802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:10:41 compute-0 sudo[222330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgrdnmdibrrntfhtxtvszkunxgubqhdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839840.582636-359-81677654913069/AnsiballZ_copy.py'
Jan 31 06:10:41 compute-0 sudo[222330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:41 compute-0 podman[222196]: 2026-01-31 06:10:41.357573981 +0000 UTC m=+0.405335121 container create 37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 06:10:41 compute-0 python3.9[222332]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839840.582636-359-81677654913069/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:41 compute-0 sudo[222330]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:41 compute-0 systemd[1]: Started libpod-conmon-37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a.scope.
Jan 31 06:10:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:10:41 compute-0 podman[222196]: 2026-01-31 06:10:41.945412871 +0000 UTC m=+0.993174091 container init 37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ramanujan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:10:41 compute-0 podman[222196]: 2026-01-31 06:10:41.954063511 +0000 UTC m=+1.001824641 container start 37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ramanujan, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 06:10:41 compute-0 agitated_ramanujan[222390]: 167 167
Jan 31 06:10:41 compute-0 systemd[1]: libpod-37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a.scope: Deactivated successfully.
Jan 31 06:10:41 compute-0 sudo[222495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsxhxyclgnohzjqrgdcloohbxwzdaylh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839841.7294655-375-234579332256383/AnsiballZ_lineinfile.py'
Jan 31 06:10:41 compute-0 sudo[222495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:42 compute-0 podman[222196]: 2026-01-31 06:10:42.153011678 +0000 UTC m=+1.200772918 container attach 37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ramanujan, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 06:10:42 compute-0 podman[222196]: 2026-01-31 06:10:42.154864249 +0000 UTC m=+1.202625489 container died 37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ramanujan, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 06:10:42 compute-0 python3.9[222503]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:42 compute-0 sudo[222495]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:42 compute-0 ceph-mon[75251]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:42 compute-0 sudo[222653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xorutnwhmjlogtuqrtewcbdoussbfyxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839842.4417038-383-235884754831710/AnsiballZ_systemd.py'
Jan 31 06:10:42 compute-0 sudo[222653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:43 compute-0 python3.9[222655]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:10:43 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 06:10:43 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 06:10:43 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 06:10:43 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 06:10:43 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 06:10:43 compute-0 sudo[222653]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e915a13042075ffe344f549b671af8658d478651661b159f0ab7c85a11333e-merged.mount: Deactivated successfully.
Jan 31 06:10:43 compute-0 sudo[222810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eggivzqbxaozubmbxxoxevqszwlymtrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839843.4217813-391-175557504927784/AnsiballZ_dnf.py'
Jan 31 06:10:43 compute-0 sudo[222810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:43 compute-0 python3.9[222812]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:10:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:10:44
Jan 31 06:10:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:10:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:10:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.rgw.root', '.mgr']
Jan 31 06:10:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:10:44 compute-0 ceph-mon[75251]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:10:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:45 compute-0 podman[222196]: 2026-01-31 06:10:45.565057221 +0000 UTC m=+4.612818391 container remove 37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:10:45 compute-0 systemd[1]: libpod-conmon-37117de850d1656937ea735fcc7e55d833e0f8c8899b8edfa366c122fb20893a.scope: Deactivated successfully.
Jan 31 06:10:45 compute-0 podman[222825]: 2026-01-31 06:10:45.692879368 +0000 UTC m=+0.024163863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:10:46 compute-0 podman[222825]: 2026-01-31 06:10:46.205197505 +0000 UTC m=+0.536481950 container create 473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:10:46 compute-0 systemd[1]: Started libpod-conmon-473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96.scope.
Jan 31 06:10:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8e999bfa33f8390fb61be4b96b606d57f1871f0ed64906ec549c1e1967af8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8e999bfa33f8390fb61be4b96b606d57f1871f0ed64906ec549c1e1967af8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8e999bfa33f8390fb61be4b96b606d57f1871f0ed64906ec549c1e1967af8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8e999bfa33f8390fb61be4b96b606d57f1871f0ed64906ec549c1e1967af8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b8e999bfa33f8390fb61be4b96b606d57f1871f0ed64906ec549c1e1967af8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:46 compute-0 podman[222825]: 2026-01-31 06:10:46.910157443 +0000 UTC m=+1.241441938 container init 473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:10:46 compute-0 podman[222825]: 2026-01-31 06:10:46.918059723 +0000 UTC m=+1.249344178 container start 473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhaskara, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:10:47 compute-0 podman[222825]: 2026-01-31 06:10:47.091634794 +0000 UTC m=+1.422919259 container attach 473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhaskara, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:10:47 compute-0 systemd[1]: Reloading.
Jan 31 06:10:47 compute-0 ceph-mon[75251]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:47 compute-0 systemd-rc-local-generator[222878]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:10:47 compute-0 systemd-sysv-generator[222883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:10:47 compute-0 thirsty_bhaskara[222841]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:10:47 compute-0 thirsty_bhaskara[222841]: --> All data devices are unavailable
Jan 31 06:10:47 compute-0 systemd[1]: Reloading.
Jan 31 06:10:47 compute-0 podman[222825]: 2026-01-31 06:10:47.410881388 +0000 UTC m=+1.742165803 container died 473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 06:10:47 compute-0 systemd-rc-local-generator[222934]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:10:47 compute-0 systemd-sysv-generator[222937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:10:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:47 compute-0 systemd[1]: libpod-473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96.scope: Deactivated successfully.
Jan 31 06:10:47 compute-0 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 06:10:47 compute-0 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 06:10:47 compute-0 lvm[222978]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:10:47 compute-0 lvm[222978]: VG ceph_vg1 finished
Jan 31 06:10:47 compute-0 lvm[222973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:10:47 compute-0 lvm[222973]: VG ceph_vg0 finished
Jan 31 06:10:47 compute-0 lvm[222977]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:10:47 compute-0 lvm[222977]: VG ceph_vg2 finished
Jan 31 06:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b8e999bfa33f8390fb61be4b96b606d57f1871f0ed64906ec549c1e1967af8-merged.mount: Deactivated successfully.
Jan 31 06:10:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:10:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:10:48 compute-0 systemd[1]: Reloading.
Jan 31 06:10:49 compute-0 systemd-rc-local-generator[223024]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:10:49 compute-0 systemd-sysv-generator[223027]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:10:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:10:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:10:50.204 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:10:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:10:50.205 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:10:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:10:50.206 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:10:50 compute-0 ceph-mon[75251]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:50 compute-0 podman[222825]: 2026-01-31 06:10:50.690470855 +0000 UTC m=+5.021755270 container remove 473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 31 06:10:50 compute-0 sudo[222106]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:50 compute-0 systemd[1]: libpod-conmon-473fe0a19e2d2138fd5a96f756646e3e434195c58aa7e7da4c86a1a5db02de96.scope: Deactivated successfully.
Jan 31 06:10:50 compute-0 sudo[223039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:10:50 compute-0 sudo[223039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:10:50 compute-0 sudo[223039]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:50 compute-0 sudo[223064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:10:50 compute-0 sudo[223064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:10:51 compute-0 podman[223102]: 2026-01-31 06:10:51.172824057 +0000 UTC m=+0.035534210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:10:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:51 compute-0 podman[223102]: 2026-01-31 06:10:51.600304603 +0000 UTC m=+0.463014696 container create e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_goldstine, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:10:51 compute-0 systemd[1]: Started libpod-conmon-e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60.scope.
Jan 31 06:10:52 compute-0 ceph-mon[75251]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:10:52 compute-0 sudo[222810]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:52 compute-0 podman[223102]: 2026-01-31 06:10:52.917353695 +0000 UTC m=+1.780063858 container init e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 06:10:52 compute-0 podman[223102]: 2026-01-31 06:10:52.92831307 +0000 UTC m=+1.791023123 container start e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_goldstine, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:10:52 compute-0 stupefied_goldstine[224259]: 167 167
Jan 31 06:10:52 compute-0 systemd[1]: libpod-e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60.scope: Deactivated successfully.
Jan 31 06:10:53 compute-0 sudo[224434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afiddhwqimhpxrmtmsmkhoxbfcitgxjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839853.0714514-399-27801903034237/AnsiballZ_systemd_service.py'
Jan 31 06:10:53 compute-0 sudo[224434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:53 compute-0 podman[223102]: 2026-01-31 06:10:53.450158973 +0000 UTC m=+2.312869096 container attach e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_goldstine, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 31 06:10:53 compute-0 podman[223102]: 2026-01-31 06:10:53.451302795 +0000 UTC m=+2.314012858 container died e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_goldstine, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 06:10:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:53 compute-0 python3.9[224436]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:10:53 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 31 06:10:53 compute-0 iscsid[217604]: iscsid shutting down.
Jan 31 06:10:53 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 06:10:53 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 31 06:10:53 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 06:10:53 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 06:10:53 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 06:10:54 compute-0 sudo[224434]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:54 compute-0 sudo[224601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtbmnxmbksdxhtwqtzeqcxwthgclksgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839854.1782897-407-97942235046413/AnsiballZ_systemd_service.py'
Jan 31 06:10:54 compute-0 sudo[224601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:54 compute-0 ceph-mon[75251]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:54 compute-0 python3.9[224603]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:10:54 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 06:10:54 compute-0 multipathd[221580]: exit (signal)
Jan 31 06:10:54 compute-0 multipathd[221580]: --------shut down-------
Jan 31 06:10:54 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 06:10:54 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 06:10:54 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 06:10:54 compute-0 multipathd[224609]: --------start up--------
Jan 31 06:10:54 compute-0 multipathd[224609]: read /etc/multipath.conf
Jan 31 06:10:54 compute-0 multipathd[224609]: path checkers start up
Jan 31 06:10:54 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 06:10:54 compute-0 sudo[224601]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-952f6965700df52ef3df2dc6cb7442818a2d9708550d1748f3668819ceae89bd-merged.mount: Deactivated successfully.
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:10:55 compute-0 python3.9[224767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:10:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:10:56 compute-0 ceph-mon[75251]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:56 compute-0 sudo[224921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfmaeiolhebqxbqcayaksyebrjnthrus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839856.2870743-425-256221581365426/AnsiballZ_file.py'
Jan 31 06:10:56 compute-0 sudo[224921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:56 compute-0 python3.9[224923]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:56 compute-0 sudo[224921]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:56 compute-0 podman[223102]: 2026-01-31 06:10:56.930466705 +0000 UTC m=+5.793176768 container remove e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 06:10:57 compute-0 systemd[1]: libpod-conmon-e67ad9f8764b082feb879f6cbe472ed28319776f9b6d81925fcbdbd739296a60.scope: Deactivated successfully.
Jan 31 06:10:57 compute-0 podman[224564]: 2026-01-31 06:10:57.052017898 +0000 UTC m=+2.594872663 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:10:57 compute-0 podman[224963]: 2026-01-31 06:10:57.061997875 +0000 UTC m=+0.023757142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:10:57 compute-0 podman[224963]: 2026-01-31 06:10:57.416569763 +0000 UTC m=+0.378329000 container create 12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_austin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 06:10:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:57 compute-0 sudo[225115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhwugmvgjyktihlbavengzshkyqeshb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839857.2410693-436-170977129233609/AnsiballZ_systemd_service.py'
Jan 31 06:10:57 compute-0 sudo[225115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:57 compute-0 ceph-mon[75251]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:57 compute-0 systemd[1]: Started libpod-conmon-12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f.scope.
Jan 31 06:10:57 compute-0 podman[224398]: 2026-01-31 06:10:57.678000178 +0000 UTC m=+4.326201304 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 06:10:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17ec2541956bcae4be98f65cbe907d330c49e22e7fe5d275daa2193086f8c8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17ec2541956bcae4be98f65cbe907d330c49e22e7fe5d275daa2193086f8c8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17ec2541956bcae4be98f65cbe907d330c49e22e7fe5d275daa2193086f8c8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17ec2541956bcae4be98f65cbe907d330c49e22e7fe5d275daa2193086f8c8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:10:58 compute-0 python3.9[225117]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:10:58 compute-0 systemd[1]: Reloading.
Jan 31 06:10:58 compute-0 podman[224963]: 2026-01-31 06:10:58.07191843 +0000 UTC m=+1.033677667 container init 12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 31 06:10:58 compute-0 podman[224963]: 2026-01-31 06:10:58.082488385 +0000 UTC m=+1.044247602 container start 12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:10:58 compute-0 systemd-sysv-generator[225152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:10:58 compute-0 systemd-rc-local-generator[225149]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:10:58 compute-0 clever_austin[225120]: {
Jan 31 06:10:58 compute-0 clever_austin[225120]:     "0": [
Jan 31 06:10:58 compute-0 clever_austin[225120]:         {
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "devices": [
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "/dev/loop3"
Jan 31 06:10:58 compute-0 clever_austin[225120]:             ],
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_name": "ceph_lv0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_size": "21470642176",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "name": "ceph_lv0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "tags": {
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cluster_name": "ceph",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.crush_device_class": "",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.encrypted": "0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.objectstore": "bluestore",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osd_id": "0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.type": "block",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.vdo": "0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.with_tpm": "0"
Jan 31 06:10:58 compute-0 clever_austin[225120]:             },
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "type": "block",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "vg_name": "ceph_vg0"
Jan 31 06:10:58 compute-0 clever_austin[225120]:         }
Jan 31 06:10:58 compute-0 clever_austin[225120]:     ],
Jan 31 06:10:58 compute-0 clever_austin[225120]:     "1": [
Jan 31 06:10:58 compute-0 clever_austin[225120]:         {
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "devices": [
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "/dev/loop4"
Jan 31 06:10:58 compute-0 clever_austin[225120]:             ],
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_name": "ceph_lv1",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_size": "21470642176",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "name": "ceph_lv1",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "tags": {
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cluster_name": "ceph",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.crush_device_class": "",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.encrypted": "0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.objectstore": "bluestore",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osd_id": "1",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.type": "block",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.vdo": "0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.with_tpm": "0"
Jan 31 06:10:58 compute-0 clever_austin[225120]:             },
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "type": "block",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "vg_name": "ceph_vg1"
Jan 31 06:10:58 compute-0 clever_austin[225120]:         }
Jan 31 06:10:58 compute-0 clever_austin[225120]:     ],
Jan 31 06:10:58 compute-0 clever_austin[225120]:     "2": [
Jan 31 06:10:58 compute-0 clever_austin[225120]:         {
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "devices": [
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "/dev/loop5"
Jan 31 06:10:58 compute-0 clever_austin[225120]:             ],
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_name": "ceph_lv2",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_size": "21470642176",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "name": "ceph_lv2",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "tags": {
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.cluster_name": "ceph",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.crush_device_class": "",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.encrypted": "0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.objectstore": "bluestore",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osd_id": "2",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.type": "block",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.vdo": "0",
Jan 31 06:10:58 compute-0 clever_austin[225120]:                 "ceph.with_tpm": "0"
Jan 31 06:10:58 compute-0 clever_austin[225120]:             },
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "type": "block",
Jan 31 06:10:58 compute-0 clever_austin[225120]:             "vg_name": "ceph_vg2"
Jan 31 06:10:58 compute-0 clever_austin[225120]:         }
Jan 31 06:10:58 compute-0 clever_austin[225120]:     ]
Jan 31 06:10:58 compute-0 clever_austin[225120]: }
Jan 31 06:10:58 compute-0 podman[224963]: 2026-01-31 06:10:58.416502139 +0000 UTC m=+1.378261386 container attach 12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_austin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:10:58 compute-0 podman[224963]: 2026-01-31 06:10:58.418215327 +0000 UTC m=+1.379974574 container died 12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 31 06:10:58 compute-0 systemd[1]: libpod-12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f.scope: Deactivated successfully.
Jan 31 06:10:58 compute-0 sudo[225115]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:59 compute-0 python3.9[225328]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:10:59 compute-0 network[225345]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 06:10:59 compute-0 network[225346]: 'network-scripts' will be removed from distribution in near future.
Jan 31 06:10:59 compute-0 network[225347]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 06:10:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:10:59 compute-0 ceph-mon[75251]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:10:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f17ec2541956bcae4be98f65cbe907d330c49e22e7fe5d275daa2193086f8c8a-merged.mount: Deactivated successfully.
Jan 31 06:11:00 compute-0 ceph-mon[75251]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:00 compute-0 podman[224963]: 2026-01-31 06:11:00.685717269 +0000 UTC m=+3.647476496 container remove 12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_austin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 06:11:00 compute-0 systemd[1]: libpod-conmon-12f07173af3f379efe8a568f81c000d31e3e013ab2f80baab0cd2bd21b1e9f7f.scope: Deactivated successfully.
Jan 31 06:11:00 compute-0 sudo[223064]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:00 compute-0 sudo[225415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:11:00 compute-0 sudo[225415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:11:00 compute-0 sudo[225415]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:00 compute-0 sudo[225445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:11:00 compute-0 sudo[225445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:11:01 compute-0 podman[225501]: 2026-01-31 06:11:01.160340057 +0000 UTC m=+0.032123175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:11:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:01 compute-0 podman[225501]: 2026-01-31 06:11:01.523472513 +0000 UTC m=+0.395255681 container create 8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 06:11:01 compute-0 systemd[1]: Started libpod-conmon-8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7.scope.
Jan 31 06:11:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:11:02 compute-0 sudo[225701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwfjzjvvtqtfyqcicxbksbpwstzowbsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839861.863182-455-9314815272501/AnsiballZ_systemd_service.py'
Jan 31 06:11:02 compute-0 sudo[225701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:02 compute-0 podman[225501]: 2026-01-31 06:11:02.430208356 +0000 UTC m=+1.301991564 container init 8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:11:02 compute-0 podman[225501]: 2026-01-31 06:11:02.439641778 +0000 UTC m=+1.311424936 container start 8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 06:11:02 compute-0 pensive_pike[225648]: 167 167
Jan 31 06:11:02 compute-0 systemd[1]: libpod-8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7.scope: Deactivated successfully.
Jan 31 06:11:02 compute-0 ceph-mon[75251]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:02 compute-0 python3.9[225703]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:11:02 compute-0 sudo[225701]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:02 compute-0 podman[225501]: 2026-01-31 06:11:02.574931563 +0000 UTC m=+1.446714731 container attach 8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:11:02 compute-0 podman[225501]: 2026-01-31 06:11:02.575788707 +0000 UTC m=+1.447571885 container died 8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 06:11:02 compute-0 sudo[225867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkmidtjfdpvmvytubgdvngiptbmvaozz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839862.662438-455-33223622791725/AnsiballZ_systemd_service.py'
Jan 31 06:11:02 compute-0 sudo[225867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6db6d2a45ea4c7569150299221a0269dd9259c5171ff475e0fd72d7048a86e74-merged.mount: Deactivated successfully.
Jan 31 06:11:03 compute-0 python3.9[225869]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:11:03 compute-0 sudo[225867]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:03 compute-0 podman[225501]: 2026-01-31 06:11:03.446351134 +0000 UTC m=+2.318134262 container remove 8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:11:03 compute-0 systemd[1]: libpod-conmon-8073dd0982034da50338ee85ab46aa0a425786315ab8f9654931e6ee67bc9ed7.scope: Deactivated successfully.
Jan 31 06:11:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 31 06:11:03 compute-0 podman[225908]: 2026-01-31 06:11:03.618135234 +0000 UTC m=+0.070306397 container create dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:11:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:11:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:11:03 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.172s CPU time.
Jan 31 06:11:03 compute-0 systemd[1]: run-rff80439b2da8472a86e10c3b7b0f7569.service: Deactivated successfully.
Jan 31 06:11:03 compute-0 systemd[1]: Started libpod-conmon-dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4.scope.
Jan 31 06:11:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba21c2f249049237ef3275f1adf0519aaa7dbb37b672e85bc1172576f83d152/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba21c2f249049237ef3275f1adf0519aaa7dbb37b672e85bc1172576f83d152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba21c2f249049237ef3275f1adf0519aaa7dbb37b672e85bc1172576f83d152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fba21c2f249049237ef3275f1adf0519aaa7dbb37b672e85bc1172576f83d152/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:11:03 compute-0 podman[225908]: 2026-01-31 06:11:03.577133493 +0000 UTC m=+0.029304636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:11:03 compute-0 podman[225908]: 2026-01-31 06:11:03.706600636 +0000 UTC m=+0.158771819 container init dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moore, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:11:03 compute-0 podman[225908]: 2026-01-31 06:11:03.712409188 +0000 UTC m=+0.164580371 container start dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moore, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 31 06:11:03 compute-0 podman[225908]: 2026-01-31 06:11:03.74947867 +0000 UTC m=+0.201649853 container attach dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:11:03 compute-0 sudo[226051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnmgapgduaqapzjzqkwqbbvfrptnhoto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839863.5550313-455-23294997900415/AnsiballZ_systemd_service.py'
Jan 31 06:11:03 compute-0 sudo[226051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:04 compute-0 python3.9[226053]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:11:04 compute-0 sudo[226051]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:04 compute-0 lvm[226173]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:11:04 compute-0 lvm[226170]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:11:04 compute-0 lvm[226173]: VG ceph_vg1 finished
Jan 31 06:11:04 compute-0 lvm[226170]: VG ceph_vg0 finished
Jan 31 06:11:04 compute-0 lvm[226196]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:11:04 compute-0 lvm[226196]: VG ceph_vg2 finished
Jan 31 06:11:04 compute-0 optimistic_moore[225977]: {}
Jan 31 06:11:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:04 compute-0 systemd[1]: libpod-dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4.scope: Deactivated successfully.
Jan 31 06:11:04 compute-0 podman[225908]: 2026-01-31 06:11:04.451199278 +0000 UTC m=+0.903370461 container died dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.456848) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839864456877, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1646, "num_deletes": 250, "total_data_size": 2768224, "memory_usage": 2804360, "flush_reason": "Manual Compaction"}
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839864472480, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1561717, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11895, "largest_seqno": 13540, "table_properties": {"data_size": 1556250, "index_size": 2671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13749, "raw_average_key_size": 20, "raw_value_size": 1544213, "raw_average_value_size": 2260, "num_data_blocks": 123, "num_entries": 683, "num_filter_entries": 683, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769839678, "oldest_key_time": 1769839678, "file_creation_time": 1769839864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 15703 microseconds, and 3645 cpu microseconds.
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.472541) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1561717 bytes OK
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.472569) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.475718) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.475750) EVENT_LOG_v1 {"time_micros": 1769839864475740, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.475777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2761156, prev total WAL file size 2761156, number of live WAL files 2.
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.476627) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323534' seq:72057594037927935, type:22 .. '6D67727374617400353035' seq:0, type:0; will stop at (end)
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1525KB)], [29(8103KB)]
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839864476690, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9859907, "oldest_snapshot_seqno": -1}
Jan 31 06:11:04 compute-0 sudo[226295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmlctglwtnzqwnoxhluqgnsqmgkfdrhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839864.2652032-455-246752935359080/AnsiballZ_systemd_service.py'
Jan 31 06:11:04 compute-0 sudo[226295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4033 keys, 7689987 bytes, temperature: kUnknown
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839864546416, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7689987, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7661046, "index_size": 17759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 96159, "raw_average_key_size": 23, "raw_value_size": 7586372, "raw_average_value_size": 1881, "num_data_blocks": 771, "num_entries": 4033, "num_filter_entries": 4033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769839864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:11:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fba21c2f249049237ef3275f1adf0519aaa7dbb37b672e85bc1172576f83d152-merged.mount: Deactivated successfully.
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.546627) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7689987 bytes
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.550673) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.3 rd, 110.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 4457, records dropped: 424 output_compression: NoCompression
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.550691) EVENT_LOG_v1 {"time_micros": 1769839864550683, "job": 12, "event": "compaction_finished", "compaction_time_micros": 69790, "compaction_time_cpu_micros": 13154, "output_level": 6, "num_output_files": 1, "total_output_size": 7689987, "num_input_records": 4457, "num_output_records": 4033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839864550884, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769839864551502, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.476501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.551525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.551529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.551530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.551532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:11:04 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:11:04.551533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:11:04 compute-0 podman[225908]: 2026-01-31 06:11:04.57238564 +0000 UTC m=+1.024556803 container remove dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 06:11:04 compute-0 systemd[1]: libpod-conmon-dc491bb8d7729606e29c66c1e4eba2ec3708cc588f2f9ceaf6f2786fb7edf8a4.scope: Deactivated successfully.
Jan 31 06:11:04 compute-0 sudo[225445]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:11:04 compute-0 ceph-mon[75251]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 31 06:11:04 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:11:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:11:04 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:11:04 compute-0 sudo[226298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:11:04 compute-0 sudo[226298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:11:04 compute-0 sudo[226298]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:04 compute-0 python3.9[226297]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:11:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 31 06:11:05 compute-0 sudo[226295]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:11:05 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:11:06 compute-0 sudo[226473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mubmhdiuubltyahlrwdgrrzmkuageewt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839865.9641063-455-236796447052303/AnsiballZ_systemd_service.py'
Jan 31 06:11:06 compute-0 sudo[226473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:06 compute-0 python3.9[226475]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:11:06 compute-0 sudo[226473]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:06 compute-0 sudo[226626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smekjmzsgqknwqfdlcsddiyowrrwspka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839866.6830516-455-43760713780062/AnsiballZ_systemd_service.py'
Jan 31 06:11:06 compute-0 sudo[226626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:07 compute-0 ceph-mon[75251]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 31 06:11:07 compute-0 python3.9[226628]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:11:07 compute-0 sudo[226626]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 06:11:07 compute-0 sudo[226779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amgnpgoavfdahtdhwwmfwemafrgdezyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839867.4185667-455-77532129508094/AnsiballZ_systemd_service.py'
Jan 31 06:11:07 compute-0 sudo[226779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:08 compute-0 python3.9[226781]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:11:08 compute-0 sudo[226779]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:08 compute-0 sudo[226932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddaxfhhrtfzjzdkbtidhkphttvopacts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839868.2700143-455-57575328632453/AnsiballZ_systemd_service.py'
Jan 31 06:11:08 compute-0 sudo[226932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:08 compute-0 ceph-mon[75251]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 06:11:08 compute-0 python3.9[226934]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:11:08 compute-0 sudo[226932]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:09 compute-0 sudo[227085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zudxvysuvwrxcakiknbmdocwqlqdibzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839869.2215312-514-192931988612248/AnsiballZ_file.py'
Jan 31 06:11:09 compute-0 sudo[227085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 06:11:09 compute-0 python3.9[227087]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:09 compute-0 sudo[227085]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:09 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 06:11:10 compute-0 ceph-mon[75251]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 06:11:10 compute-0 sudo[227238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhdhtactxjfofmdxmnpnxddrzwmkumf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839869.8017552-514-19986702876621/AnsiballZ_file.py'
Jan 31 06:11:10 compute-0 sudo[227238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:10 compute-0 python3.9[227240]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:10 compute-0 sudo[227238]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:10 compute-0 sudo[227390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxzqdtuxvujnksifyhifbqvfptiednwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839870.448635-514-38007484113985/AnsiballZ_file.py'
Jan 31 06:11:10 compute-0 sudo[227390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:10 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 06:11:11 compute-0 python3.9[227392]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:11 compute-0 sudo[227390]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 06:11:11 compute-0 sudo[227543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effqozkykhwfeblahubqndbmrfjgfkcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839871.2907271-514-29862938697139/AnsiballZ_file.py'
Jan 31 06:11:11 compute-0 sudo[227543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:11 compute-0 python3.9[227545]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:11 compute-0 sudo[227543]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:12 compute-0 sudo[227695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krxhovihtedcgqcmhjequczdusoyepla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839872.0126507-514-8709740700101/AnsiballZ_file.py'
Jan 31 06:11:12 compute-0 sudo[227695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:12 compute-0 ceph-mon[75251]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 06:11:12 compute-0 python3.9[227697]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:12 compute-0 sudo[227695]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:12 compute-0 sudo[227847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imnccstnjtxmhjibvzolaxrelviphqfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839872.5623646-514-188665263468118/AnsiballZ_file.py'
Jan 31 06:11:12 compute-0 sudo[227847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:13 compute-0 python3.9[227849]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:13 compute-0 sudo[227847]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:13 compute-0 sudo[227999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emamgdlyjkrwxwzunijohljoeixeevhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839873.3052444-514-70973836370595/AnsiballZ_file.py'
Jan 31 06:11:13 compute-0 sudo[227999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 06:11:13 compute-0 python3.9[228001]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:13 compute-0 sudo[227999]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:14 compute-0 ceph-mon[75251]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 06:11:14 compute-0 sudo[228151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnltgepypopfnifpxejyjzamsmsaamym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839873.8461936-514-194386612434005/AnsiballZ_file.py'
Jan 31 06:11:14 compute-0 sudo[228151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:14 compute-0 python3.9[228153]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:14 compute-0 sudo[228151]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:14 compute-0 sudo[228303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqatyoianjhnxndetvhbbaslyzogoscu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839874.4360712-571-101297912897753/AnsiballZ_file.py'
Jan 31 06:11:14 compute-0 sudo[228303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:14 compute-0 python3.9[228305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:14 compute-0 sudo[228303]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:15 compute-0 sudo[228455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffttwhrnwpysqyvrufbouzxvsjflxbpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839874.9740841-571-162440004768245/AnsiballZ_file.py'
Jan 31 06:11:15 compute-0 sudo[228455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:11:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:11:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:11:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:11:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:11:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:11:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 31 06:11:15 compute-0 python3.9[228457]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:15 compute-0 sudo[228455]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:15 compute-0 sudo[228607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfobxkxuclmidsmzyhybealaosctbvmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839875.6762872-571-279069952014610/AnsiballZ_file.py'
Jan 31 06:11:15 compute-0 sudo[228607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:16 compute-0 python3.9[228609]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:16 compute-0 sudo[228607]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:16 compute-0 sudo[228759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcyzihgyaczlznbogwkrfafjmwtfqumc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839876.152686-571-253363598171297/AnsiballZ_file.py'
Jan 31 06:11:16 compute-0 sudo[228759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:16 compute-0 python3.9[228761]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:16 compute-0 sudo[228759]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:16 compute-0 sudo[228911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmmbkokhzolygifdxztnpcsmqdhvjoel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839876.6493218-571-223059428166509/AnsiballZ_file.py'
Jan 31 06:11:16 compute-0 sudo[228911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:16 compute-0 ceph-mon[75251]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 31 06:11:17 compute-0 python3.9[228913]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:17 compute-0 sudo[228911]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:17 compute-0 sudo[229063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xolbyqnvgefdthiheuglpevxoogeendn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839877.241709-571-262991082736014/AnsiballZ_file.py'
Jan 31 06:11:17 compute-0 sudo[229063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 31 06:11:17 compute-0 python3.9[229065]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:17 compute-0 sudo[229063]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:17 compute-0 sudo[229215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lraoahezczlzxyvlhdzmycapxniszbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839877.7908926-571-89103291715130/AnsiballZ_file.py'
Jan 31 06:11:18 compute-0 sudo[229215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:18 compute-0 ceph-mon[75251]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 14 op/s
Jan 31 06:11:18 compute-0 python3.9[229217]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:18 compute-0 sudo[229215]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:18 compute-0 sudo[229367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndluhflxuidgduymkfaeognncodhzqxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839878.2840872-571-48199076323151/AnsiballZ_file.py'
Jan 31 06:11:18 compute-0 sudo[229367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:18 compute-0 python3.9[229369]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:18 compute-0 sudo[229367]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:19 compute-0 sudo[229519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqzosgopajtxvvszmpiktecmcgkzozci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839878.9122658-629-135915959076276/AnsiballZ_command.py'
Jan 31 06:11:19 compute-0 sudo[229519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:19 compute-0 python3.9[229521]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:19 compute-0 sudo[229519]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 06:11:20 compute-0 python3.9[229673]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 06:11:20 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 06:11:20 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 06:11:20 compute-0 sudo[229825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywlyqcenskaciahmhqmszombbufpzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839880.326332-647-279053938274744/AnsiballZ_systemd_service.py'
Jan 31 06:11:20 compute-0 sudo[229825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:20 compute-0 python3.9[229827]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:11:20 compute-0 systemd[1]: Reloading.
Jan 31 06:11:20 compute-0 systemd-rc-local-generator[229854]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:11:20 compute-0 systemd-sysv-generator[229857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:11:21 compute-0 ceph-mon[75251]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 06:11:21 compute-0 sudo[229825]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:21 compute-0 sudo[230011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opphuuhmsplhnhhobgmdqsyisxpefgps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839881.2979438-655-182983308636498/AnsiballZ_command.py'
Jan 31 06:11:21 compute-0 sudo[230011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:21 compute-0 python3.9[230013]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:21 compute-0 sudo[230011]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:22 compute-0 sudo[230164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uznyedsaqzpkepuykyxcioaapkhfsjuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839881.7993808-655-67221686312642/AnsiballZ_command.py'
Jan 31 06:11:22 compute-0 sudo[230164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:22 compute-0 ceph-mon[75251]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:22 compute-0 python3.9[230166]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:22 compute-0 sudo[230164]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:22 compute-0 sudo[230317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxblkbgaktmiinimgjytjcciegcbryov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839882.391313-655-258144768369361/AnsiballZ_command.py'
Jan 31 06:11:22 compute-0 sudo[230317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:22 compute-0 python3.9[230319]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:22 compute-0 sudo[230317]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:23 compute-0 sudo[230470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqiehtbquttntflexoorrzvgzvunftcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839882.9815722-655-171604321939932/AnsiballZ_command.py'
Jan 31 06:11:23 compute-0 sudo[230470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:23 compute-0 python3.9[230472]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:23 compute-0 sudo[230470]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:23 compute-0 sudo[230623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qflbuytzgqlrwpfcajdufhylwhythmkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839883.5401974-655-216634829900699/AnsiballZ_command.py'
Jan 31 06:11:23 compute-0 sudo[230623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:24 compute-0 python3.9[230625]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:24 compute-0 sudo[230623]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:24 compute-0 ceph-mon[75251]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:24 compute-0 sudo[230776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgglnflxditchpmeswpczlvswplmzbjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839884.2155714-655-220569700692034/AnsiballZ_command.py'
Jan 31 06:11:24 compute-0 sudo[230776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:24 compute-0 python3.9[230778]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:24 compute-0 sudo[230776]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:24 compute-0 sudo[230929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwbwbmvoditczjbygcmwwrlqcpnqyisy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839884.744472-655-201273268654055/AnsiballZ_command.py'
Jan 31 06:11:24 compute-0 sudo[230929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:25 compute-0 python3.9[230931]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:25 compute-0 sudo[230929]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:25 compute-0 sudo[231082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lupmovfmcpzuvfpatwkjmrwxumzybmoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839885.315366-655-208374945887957/AnsiballZ_command.py'
Jan 31 06:11:25 compute-0 sudo[231082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:25 compute-0 python3.9[231084]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:11:25 compute-0 sudo[231082]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:26 compute-0 ceph-mon[75251]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:27 compute-0 sudo[231256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfsmrpowzsechluqomxpxhickpotqcnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839887.6770504-734-107914458760281/AnsiballZ_file.py'
Jan 31 06:11:27 compute-0 sudo[231256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:27 compute-0 podman[231209]: 2026-01-31 06:11:27.955090624 +0000 UTC m=+0.078198157 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:11:27 compute-0 podman[231210]: 2026-01-31 06:11:27.961712368 +0000 UTC m=+0.084498552 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 06:11:28 compute-0 python3.9[231272]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:28 compute-0 ceph-mon[75251]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:28 compute-0 sudo[231256]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:28 compute-0 sudo[231433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnnfczpsvagmkgdmlxjzyjaaggioduuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839888.3094254-734-189168760435993/AnsiballZ_file.py'
Jan 31 06:11:28 compute-0 sudo[231433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:28 compute-0 python3.9[231435]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:28 compute-0 sudo[231433]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:29 compute-0 sudo[231585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhuxwhnenhhtwezlmvxbimnhuidblsjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839888.869774-734-269712627225913/AnsiballZ_file.py'
Jan 31 06:11:29 compute-0 sudo[231585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:29 compute-0 python3.9[231587]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:29 compute-0 sudo[231585]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:30 compute-0 sudo[231737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhlamygncqefzznrvbuucxutwccdjcju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839889.8217664-756-240461910290896/AnsiballZ_file.py'
Jan 31 06:11:30 compute-0 sudo[231737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:30 compute-0 python3.9[231739]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:30 compute-0 sudo[231737]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:30 compute-0 sudo[231889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkvjzusdathgpnbsszjbdxxrfsrzkmsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839890.4308753-756-77497684083819/AnsiballZ_file.py'
Jan 31 06:11:30 compute-0 sudo[231889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:30 compute-0 ceph-mon[75251]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:30 compute-0 python3.9[231891]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:30 compute-0 sudo[231889]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:31 compute-0 sudo[232041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spidifzfsfvevfrfuxrfagyrxhhymhdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839890.9844162-756-186835412931342/AnsiballZ_file.py'
Jan 31 06:11:31 compute-0 sudo[232041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:31 compute-0 python3.9[232043]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:31 compute-0 sudo[232041]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:31 compute-0 sudo[232193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aydwqvdpvilwzqiiyyayhngpomjajfuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839891.5090468-756-21358890780330/AnsiballZ_file.py'
Jan 31 06:11:31 compute-0 sudo[232193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:31 compute-0 python3.9[232195]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:31 compute-0 sudo[232193]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:32 compute-0 sudo[232345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdxxbjhqttxbteecbnrbiokqpbehdsyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839892.04463-756-132109390578984/AnsiballZ_file.py'
Jan 31 06:11:32 compute-0 sudo[232345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:32 compute-0 python3.9[232347]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:32 compute-0 sudo[232345]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:32 compute-0 ceph-mon[75251]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:32 compute-0 sudo[232497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygnlfovhdwxnugvhsnzuqnblkyktgdep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839892.6676984-756-232632974493753/AnsiballZ_file.py'
Jan 31 06:11:32 compute-0 sudo[232497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:33 compute-0 python3.9[232499]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:33 compute-0 sudo[232497]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:33 compute-0 sudo[232649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-splubysxmasszinkcwhupcsuekfehmhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839893.1882718-756-158771293308150/AnsiballZ_file.py'
Jan 31 06:11:33 compute-0 sudo[232649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:33 compute-0 python3.9[232651]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:33 compute-0 sudo[232649]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:34 compute-0 ceph-mon[75251]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:35 compute-0 ceph-mon[75251]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:38 compute-0 ceph-mon[75251]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:39 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:39 compute-0 sudo[232801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcgrflflsnxbzawfrfbwgxruajcexvhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839899.465405-945-78243423199242/AnsiballZ_getent.py'
Jan 31 06:11:39 compute-0 sudo[232801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:40 compute-0 python3.9[232803]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 06:11:40 compute-0 sudo[232801]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:40 compute-0 sudo[232954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbtrrjngrloufddwllwrjpinyzmobytd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839900.2031977-953-276268655326933/AnsiballZ_group.py'
Jan 31 06:11:40 compute-0 sudo[232954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:40 compute-0 ceph-mon[75251]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:40 compute-0 python3.9[232956]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:11:40 compute-0 groupadd[232957]: group added to /etc/group: name=nova, GID=42436
Jan 31 06:11:40 compute-0 groupadd[232957]: group added to /etc/gshadow: name=nova
Jan 31 06:11:40 compute-0 groupadd[232957]: new group: name=nova, GID=42436
Jan 31 06:11:40 compute-0 sudo[232954]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:41 compute-0 sudo[233112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaettflqnxmnnawjkjlwcqfmnskvhxme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839901.093694-961-4452728102980/AnsiballZ_user.py'
Jan 31 06:11:41 compute-0 sudo[233112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:41 compute-0 python3.9[233114]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 06:11:41 compute-0 useradd[233116]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 31 06:11:41 compute-0 useradd[233116]: add 'nova' to group 'libvirt'
Jan 31 06:11:41 compute-0 useradd[233116]: add 'nova' to shadow group 'libvirt'
Jan 31 06:11:41 compute-0 sudo[233112]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:42 compute-0 ceph-mon[75251]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:42 compute-0 sshd-session[233147]: Accepted publickey for zuul from 192.168.122.30 port 55076 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:11:42 compute-0 systemd-logind[797]: New session 50 of user zuul.
Jan 31 06:11:42 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 31 06:11:42 compute-0 sshd-session[233147]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:11:42 compute-0 sshd-session[233150]: Received disconnect from 192.168.122.30 port 55076:11: disconnected by user
Jan 31 06:11:42 compute-0 sshd-session[233150]: Disconnected from user zuul 192.168.122.30 port 55076
Jan 31 06:11:42 compute-0 sshd-session[233147]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:11:42 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 06:11:42 compute-0 systemd-logind[797]: Session 50 logged out. Waiting for processes to exit.
Jan 31 06:11:42 compute-0 systemd-logind[797]: Removed session 50.
Jan 31 06:11:43 compute-0 python3.9[233300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:43 compute-0 python3.9[233421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839903.0520737-986-184222179273213/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:44 compute-0 python3.9[233571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:11:44
Jan 31 06:11:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:11:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:11:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', '.mgr']
Jan 31 06:11:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:11:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:44 compute-0 ceph-mon[75251]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:44 compute-0 python3.9[233647]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:45 compute-0 python3.9[233797]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:11:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:45 compute-0 python3.9[233918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839904.8191068-986-67424477015162/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:46 compute-0 python3.9[234068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:46 compute-0 python3.9[234189]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839905.7198143-986-193059477512022/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:46 compute-0 ceph-mon[75251]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:46 compute-0 python3.9[234339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:47 compute-0 python3.9[234460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839906.610597-986-51312805656292/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:47 compute-0 python3.9[234610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:48 compute-0 ceph-mon[75251]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:48 compute-0 python3.9[234731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839907.5638583-986-181277574111977/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:48 compute-0 sudo[234881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcvfyxxmcjrvxqxuasejlqmmfxfohffm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839908.527972-1069-152745736442606/AnsiballZ_file.py'
Jan 31 06:11:48 compute-0 sudo[234881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:48 compute-0 python3.9[234883]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:48 compute-0 sudo[234881]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:49 compute-0 sudo[235033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bokihyoobgnaaerkpnlmyppuczewviat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839909.027109-1077-255754392240918/AnsiballZ_copy.py'
Jan 31 06:11:49 compute-0 sudo[235033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:49 compute-0 python3.9[235035]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:49 compute-0 sudo[235033]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:49 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:49 compute-0 sudo[235185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uufxexusjoqnjfkuqyydyfsvupsxtreg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839909.5568392-1085-201239309550874/AnsiballZ_stat.py'
Jan 31 06:11:49 compute-0 sudo[235185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:49 compute-0 python3.9[235187]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:11:49 compute-0 sudo[235185]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:11:50.205 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:11:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:11:50.205 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:11:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:11:50.205 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:11:50 compute-0 sudo[235337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxhiutihfyjceutjjuhzjzfqimwanhui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839910.0600305-1093-121703777265654/AnsiballZ_stat.py'
Jan 31 06:11:50 compute-0 sudo[235337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:50 compute-0 python3.9[235339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:50 compute-0 sudo[235337]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:50 compute-0 ceph-mon[75251]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:50 compute-0 sudo[235460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwlqxzxbyeubcuyfgyyhcywqyerjljlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839910.0600305-1093-121703777265654/AnsiballZ_copy.py'
Jan 31 06:11:50 compute-0 sudo[235460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:50 compute-0 python3.9[235462]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769839910.0600305-1093-121703777265654/.source _original_basename=.va9rvwtx follow=False checksum=a1a1be98fbe02bd44bd27b5e000f780486e8ce97 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 06:11:50 compute-0 sudo[235460]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:51 compute-0 python3.9[235614]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:11:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:51 compute-0 python3.9[235766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:52 compute-0 python3.9[235887]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839911.612011-1119-56812503546965/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:52 compute-0 ceph-mon[75251]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:53 compute-0 python3.9[236037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:11:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:53 compute-0 python3.9[236158]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769839912.6177905-1134-154071826511304/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:11:54 compute-0 sudo[236308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qokbypybngxtdnzzhmbxkdfxystfyrps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839913.847228-1151-29468655390364/AnsiballZ_container_config_data.py'
Jan 31 06:11:54 compute-0 sudo[236308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:54 compute-0 python3.9[236310]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 06:11:54 compute-0 sudo[236308]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:54 compute-0 ceph-mon[75251]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:55 compute-0 sudo[236460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imfyjfdvkvxhnmpdwqukrqagpliapcyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839914.8364367-1162-248382509747722/AnsiballZ_container_config_hash.py'
Jan 31 06:11:55 compute-0 sudo[236460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:55 compute-0 python3.9[236462]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 06:11:55 compute-0 sudo[236460]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:11:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:11:55 compute-0 ceph-mon[75251]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:56 compute-0 sudo[236612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvbsyxtdpibbxqdwphdjaxsywxigqavi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769839915.8186917-1172-248595858638246/AnsiballZ_edpm_container_manage.py'
Jan 31 06:11:56 compute-0 sudo[236612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:56 compute-0 python3[236614]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 06:11:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:58 compute-0 podman[236649]: 2026-01-31 06:11:58.14891538 +0000 UTC m=+0.071510981 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:11:58 compute-0 podman[236650]: 2026-01-31 06:11:58.15786692 +0000 UTC m=+0.078796794 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 31 06:11:58 compute-0 ceph-mon[75251]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:11:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:11:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:01 compute-0 ceph-mon[75251]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:02 compute-0 ceph-mon[75251]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:04 compute-0 sudo[236729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:12:04 compute-0 sudo[236729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:04 compute-0 sudo[236729]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:04 compute-0 sudo[236754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:12:04 compute-0 sudo[236754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:05 compute-0 ceph-mon[75251]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:06 compute-0 ceph-mon[75251]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:06 compute-0 sudo[236754]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:12:06 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:12:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:12:06 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:12:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:12:06 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:12:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:12:06 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:12:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:12:06 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:12:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:12:06 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:12:06 compute-0 sudo[236809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:12:06 compute-0 sudo[236809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:06 compute-0 sudo[236809]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:06 compute-0 sudo[236834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:12:06 compute-0 sudo[236834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:06 compute-0 podman[236626]: 2026-01-31 06:12:06.801670476 +0000 UTC m=+10.246866889 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 06:12:07 compute-0 podman[236880]: 2026-01-31 06:12:07.006822865 +0000 UTC m=+0.111251427 container create 8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:12:07 compute-0 podman[236880]: 2026-01-31 06:12:06.918846317 +0000 UTC m=+0.023274879 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 06:12:07 compute-0 python3[236614]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 06:12:07 compute-0 podman[236906]: 2026-01-31 06:12:07.050526081 +0000 UTC m=+0.060450273 container create faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_swartz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:12:07 compute-0 systemd[1]: Started libpod-conmon-faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88.scope.
Jan 31 06:12:07 compute-0 podman[236906]: 2026-01-31 06:12:07.018090379 +0000 UTC m=+0.028014591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:12:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:07 compute-0 podman[236906]: 2026-01-31 06:12:07.135245859 +0000 UTC m=+0.145170081 container init faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:12:07 compute-0 podman[236906]: 2026-01-31 06:12:07.141574635 +0000 UTC m=+0.151498837 container start faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:12:07 compute-0 agitated_swartz[236935]: 167 167
Jan 31 06:12:07 compute-0 systemd[1]: libpod-faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88.scope: Deactivated successfully.
Jan 31 06:12:07 compute-0 podman[236906]: 2026-01-31 06:12:07.149877526 +0000 UTC m=+0.159801748 container attach faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:12:07 compute-0 podman[236906]: 2026-01-31 06:12:07.152419857 +0000 UTC m=+0.162344049 container died faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:12:07 compute-0 sudo[236612]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0be7f3a19ad99beb9c6bde1d5ee6002ab21e26ecbdb4eb35ea31f5f810573d25-merged.mount: Deactivated successfully.
Jan 31 06:12:07 compute-0 podman[236906]: 2026-01-31 06:12:07.216640504 +0000 UTC m=+0.226564696 container remove faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 06:12:07 compute-0 systemd[1]: libpod-conmon-faeed61135dc7d29489344405b718b14fd018a6b81aa9981ac85c0640f2fbd88.scope: Deactivated successfully.
Jan 31 06:12:07 compute-0 podman[237016]: 2026-01-31 06:12:07.340731097 +0000 UTC m=+0.038090651 container create b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:12:07 compute-0 systemd[1]: Started libpod-conmon-b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815.scope.
Jan 31 06:12:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e6224607cfb5ea09eb92094c77380fe94e577c572a97a169db18d97420e11c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e6224607cfb5ea09eb92094c77380fe94e577c572a97a169db18d97420e11c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e6224607cfb5ea09eb92094c77380fe94e577c572a97a169db18d97420e11c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e6224607cfb5ea09eb92094c77380fe94e577c572a97a169db18d97420e11c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e6224607cfb5ea09eb92094c77380fe94e577c572a97a169db18d97420e11c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:07 compute-0 podman[237016]: 2026-01-31 06:12:07.408340909 +0000 UTC m=+0.105700483 container init b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 31 06:12:07 compute-0 podman[237016]: 2026-01-31 06:12:07.413564104 +0000 UTC m=+0.110923658 container start b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:12:07 compute-0 podman[237016]: 2026-01-31 06:12:07.322910381 +0000 UTC m=+0.020269965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:12:07 compute-0 podman[237016]: 2026-01-31 06:12:07.419761227 +0000 UTC m=+0.117120781 container attach b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swartz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:12:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:12:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:12:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:12:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:12:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:12:07 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:12:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:07 compute-0 sudo[237139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmfblvlwqaceleywhffxpfryicdyapeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839927.303738-1180-67548497372933/AnsiballZ_stat.py'
Jan 31 06:12:07 compute-0 sudo[237139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:07 compute-0 python3.9[237141]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:12:07 compute-0 sudo[237139]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:07 compute-0 festive_swartz[237061]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:12:07 compute-0 festive_swartz[237061]: --> All data devices are unavailable
Jan 31 06:12:07 compute-0 systemd[1]: libpod-b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815.scope: Deactivated successfully.
Jan 31 06:12:07 compute-0 podman[237183]: 2026-01-31 06:12:07.881028563 +0000 UTC m=+0.020793969 container died b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swartz, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e6224607cfb5ea09eb92094c77380fe94e577c572a97a169db18d97420e11c9-merged.mount: Deactivated successfully.
Jan 31 06:12:08 compute-0 podman[237183]: 2026-01-31 06:12:08.285568711 +0000 UTC m=+0.425334127 container remove b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swartz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:12:08 compute-0 systemd[1]: libpod-conmon-b362cb47cb0c76d84e5b52cfea1e2afa35074208fccbdac3cad4e56ba23d8815.scope: Deactivated successfully.
Jan 31 06:12:08 compute-0 sudo[237323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdnmusojhjkhbhskwvrvhsstodablpmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839928.1284106-1192-185341559926999/AnsiballZ_container_config_data.py'
Jan 31 06:12:08 compute-0 sudo[237323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:08 compute-0 sudo[236834]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:08 compute-0 sudo[237326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:12:08 compute-0 sudo[237326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:08 compute-0 sudo[237326]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:08 compute-0 sudo[237351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:12:08 compute-0 sudo[237351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:08 compute-0 ceph-mon[75251]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:08 compute-0 python3.9[237325]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 06:12:08 compute-0 sudo[237323]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:08 compute-0 podman[237413]: 2026-01-31 06:12:08.654252992 +0000 UTC m=+0.036499587 container create 1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 06:12:08 compute-0 systemd[1]: Started libpod-conmon-1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217.scope.
Jan 31 06:12:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:08 compute-0 podman[237413]: 2026-01-31 06:12:08.704757197 +0000 UTC m=+0.087003832 container init 1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 06:12:08 compute-0 podman[237413]: 2026-01-31 06:12:08.710398334 +0000 UTC m=+0.092644939 container start 1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:12:08 compute-0 podman[237413]: 2026-01-31 06:12:08.714274292 +0000 UTC m=+0.096520917 container attach 1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:12:08 compute-0 hungry_rubin[237429]: 167 167
Jan 31 06:12:08 compute-0 systemd[1]: libpod-1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217.scope: Deactivated successfully.
Jan 31 06:12:08 compute-0 podman[237413]: 2026-01-31 06:12:08.71529582 +0000 UTC m=+0.097542425 container died 1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 06:12:08 compute-0 podman[237413]: 2026-01-31 06:12:08.63552268 +0000 UTC m=+0.017769305 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f42d649f0a52b454ad3fdba4fd1ec0cedad38f9ef35eb5bffa1dd45cf6be704-merged.mount: Deactivated successfully.
Jan 31 06:12:08 compute-0 podman[237413]: 2026-01-31 06:12:08.760353544 +0000 UTC m=+0.142600149 container remove 1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:12:08 compute-0 systemd[1]: libpod-conmon-1c5e7a594d6eb7aef0a8f90eafeefc5312008f7159a0b7c7bb841cc349fa2217.scope: Deactivated successfully.
Jan 31 06:12:08 compute-0 podman[237506]: 2026-01-31 06:12:08.88777242 +0000 UTC m=+0.033618826 container create 5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:12:08 compute-0 systemd[1]: Started libpod-conmon-5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8.scope.
Jan 31 06:12:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f8885d159a15f3f3c96646a3d7326348c36341f78b6567175f7d4a74b635e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f8885d159a15f3f3c96646a3d7326348c36341f78b6567175f7d4a74b635e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f8885d159a15f3f3c96646a3d7326348c36341f78b6567175f7d4a74b635e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f8885d159a15f3f3c96646a3d7326348c36341f78b6567175f7d4a74b635e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:08 compute-0 podman[237506]: 2026-01-31 06:12:08.961198374 +0000 UTC m=+0.107044810 container init 5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 06:12:08 compute-0 podman[237506]: 2026-01-31 06:12:08.966176522 +0000 UTC m=+0.112022918 container start 5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 06:12:08 compute-0 podman[237506]: 2026-01-31 06:12:08.873079471 +0000 UTC m=+0.018925897 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:12:08 compute-0 podman[237506]: 2026-01-31 06:12:08.971970123 +0000 UTC m=+0.117816549 container attach 5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bassi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Jan 31 06:12:09 compute-0 sudo[237601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olgadktmrxnwciohosxvhlknfaulezde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839928.7922504-1203-131279185193971/AnsiballZ_container_config_hash.py'
Jan 31 06:12:09 compute-0 sudo[237601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:09 compute-0 python3.9[237603]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]: {
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:     "0": [
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:         {
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "devices": [
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "/dev/loop3"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             ],
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_name": "ceph_lv0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_size": "21470642176",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "name": "ceph_lv0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "tags": {
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cluster_name": "ceph",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.crush_device_class": "",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.encrypted": "0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.objectstore": "bluestore",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osd_id": "0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.type": "block",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.vdo": "0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.with_tpm": "0"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             },
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "type": "block",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "vg_name": "ceph_vg0"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:         }
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:     ],
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:     "1": [
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:         {
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "devices": [
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "/dev/loop4"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             ],
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_name": "ceph_lv1",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_size": "21470642176",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "name": "ceph_lv1",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "tags": {
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cluster_name": "ceph",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.crush_device_class": "",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.encrypted": "0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.objectstore": "bluestore",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osd_id": "1",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.type": "block",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.vdo": "0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.with_tpm": "0"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             },
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "type": "block",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "vg_name": "ceph_vg1"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:         }
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:     ],
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:     "2": [
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:         {
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "devices": [
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "/dev/loop5"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             ],
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_name": "ceph_lv2",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_size": "21470642176",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "name": "ceph_lv2",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "tags": {
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.cluster_name": "ceph",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.crush_device_class": "",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.encrypted": "0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.objectstore": "bluestore",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osd_id": "2",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.type": "block",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.vdo": "0",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:                 "ceph.with_tpm": "0"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             },
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "type": "block",
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:             "vg_name": "ceph_vg2"
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:         }
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]:     ]
Jan 31 06:12:09 compute-0 suspicious_bassi[237549]: }
Jan 31 06:12:09 compute-0 sudo[237601]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:09 compute-0 systemd[1]: libpod-5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8.scope: Deactivated successfully.
Jan 31 06:12:09 compute-0 podman[237506]: 2026-01-31 06:12:09.240320871 +0000 UTC m=+0.386167337 container died 5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-20f8885d159a15f3f3c96646a3d7326348c36341f78b6567175f7d4a74b635e9-merged.mount: Deactivated successfully.
Jan 31 06:12:09 compute-0 podman[237506]: 2026-01-31 06:12:09.305103654 +0000 UTC m=+0.450950060 container remove 5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_bassi, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:12:09 compute-0 systemd[1]: libpod-conmon-5e6c627ef75c837b5fbd301c51e13d9bdde444e25e5bff9b6129ba5c2ae433b8.scope: Deactivated successfully.
Jan 31 06:12:09 compute-0 sudo[237351]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:09 compute-0 sudo[237643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:12:09 compute-0 sudo[237643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:09 compute-0 sudo[237643]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:09 compute-0 sudo[237668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:12:09 compute-0 sudo[237668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:09 compute-0 podman[237780]: 2026-01-31 06:12:09.663166209 +0000 UTC m=+0.032702591 container create 276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:12:09 compute-0 systemd[1]: Started libpod-conmon-276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93.scope.
Jan 31 06:12:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:09 compute-0 podman[237780]: 2026-01-31 06:12:09.738172696 +0000 UTC m=+0.107709088 container init 276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:12:09 compute-0 podman[237780]: 2026-01-31 06:12:09.743755061 +0000 UTC m=+0.113291443 container start 276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:12:09 compute-0 podman[237780]: 2026-01-31 06:12:09.647505683 +0000 UTC m=+0.017042085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:12:09 compute-0 practical_tharp[237821]: 167 167
Jan 31 06:12:09 compute-0 systemd[1]: libpod-276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93.scope: Deactivated successfully.
Jan 31 06:12:09 compute-0 podman[237780]: 2026-01-31 06:12:09.751201529 +0000 UTC m=+0.120737931 container attach 276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 31 06:12:09 compute-0 podman[237780]: 2026-01-31 06:12:09.751550498 +0000 UTC m=+0.121086870 container died 276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 31 06:12:09 compute-0 sudo[237851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijiztvysvfreevfiwxksvhqmknaxreoe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769839929.4928403-1213-36571438252267/AnsiballZ_edpm_container_manage.py'
Jan 31 06:12:09 compute-0 sudo[237851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b28d2567675e05267a8e1a3275eb69a3286cabe667a15ffc3da77399d0c9dbb-merged.mount: Deactivated successfully.
Jan 31 06:12:09 compute-0 podman[237780]: 2026-01-31 06:12:09.794270116 +0000 UTC m=+0.163806488 container remove 276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:12:09 compute-0 systemd[1]: libpod-conmon-276c36221692d6e8d39dd2ac317bcc2424185a4402193ac562a4badc1db1fb93.scope: Deactivated successfully.
Jan 31 06:12:09 compute-0 podman[237874]: 2026-01-31 06:12:09.955667538 +0000 UTC m=+0.071440659 container create 252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 06:12:09 compute-0 podman[237874]: 2026-01-31 06:12:09.912099165 +0000 UTC m=+0.027872316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:12:10 compute-0 systemd[1]: Started libpod-conmon-252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3.scope.
Jan 31 06:12:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167a5a7a67a86f930ec01450f24d7f1b0af69dd119e688fa04bc2cd86accf34b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167a5a7a67a86f930ec01450f24d7f1b0af69dd119e688fa04bc2cd86accf34b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167a5a7a67a86f930ec01450f24d7f1b0af69dd119e688fa04bc2cd86accf34b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167a5a7a67a86f930ec01450f24d7f1b0af69dd119e688fa04bc2cd86accf34b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:10 compute-0 python3[237855]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 06:12:10 compute-0 podman[237874]: 2026-01-31 06:12:10.03693795 +0000 UTC m=+0.152711091 container init 252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:12:10 compute-0 podman[237874]: 2026-01-31 06:12:10.044478549 +0000 UTC m=+0.160251680 container start 252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:12:10 compute-0 podman[237874]: 2026-01-31 06:12:10.048893432 +0000 UTC m=+0.164666583 container attach 252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:12:10 compute-0 podman[237931]: 2026-01-31 06:12:10.244902197 +0000 UTC m=+0.041493726 container create c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.schema-version=1.0)
Jan 31 06:12:10 compute-0 podman[237931]: 2026-01-31 06:12:10.221242868 +0000 UTC m=+0.017834397 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 06:12:10 compute-0 python3[237855]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 06:12:10 compute-0 sudo[237851]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:10 compute-0 lvm[238141]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:12:10 compute-0 lvm[238141]: VG ceph_vg1 finished
Jan 31 06:12:10 compute-0 lvm[238140]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:12:10 compute-0 lvm[238140]: VG ceph_vg0 finished
Jan 31 06:12:10 compute-0 ceph-mon[75251]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:10 compute-0 lvm[238150]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:12:10 compute-0 lvm[238150]: VG ceph_vg2 finished
Jan 31 06:12:10 compute-0 sudo[238195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecxafumbmetqxaogsfqfamhzgoyppkck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839930.487832-1221-205792749943058/AnsiballZ_stat.py'
Jan 31 06:12:10 compute-0 sudo[238195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:10 compute-0 optimistic_antonelli[237891]: {}
Jan 31 06:12:10 compute-0 systemd[1]: libpod-252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3.scope: Deactivated successfully.
Jan 31 06:12:10 compute-0 podman[237874]: 2026-01-31 06:12:10.748758885 +0000 UTC m=+0.864532016 container died 252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 06:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-167a5a7a67a86f930ec01450f24d7f1b0af69dd119e688fa04bc2cd86accf34b-merged.mount: Deactivated successfully.
Jan 31 06:12:10 compute-0 podman[237874]: 2026-01-31 06:12:10.803223982 +0000 UTC m=+0.918997113 container remove 252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_antonelli, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 06:12:10 compute-0 systemd[1]: libpod-conmon-252a135da3f8097f182936b0df9f0464673f107e9b528e7a904224daf20730a3.scope: Deactivated successfully.
Jan 31 06:12:10 compute-0 sudo[237668]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:12:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:12:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:12:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:12:10 compute-0 python3.9[238198]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:12:10 compute-0 sudo[238209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:12:10 compute-0 sudo[238209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:12:10 compute-0 sudo[238209]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:10 compute-0 sudo[238195]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:11 compute-0 sudo[238385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llxjkkcatsrrpxrhqbjeavzrfgvfnojj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839931.2477484-1230-114705503651813/AnsiballZ_file.py'
Jan 31 06:12:11 compute-0 sudo[238385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:11 compute-0 python3.9[238387]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:12:11 compute-0 sudo[238385]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:12:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:12:11 compute-0 ceph-mon[75251]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:11 compute-0 sudo[238536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmsjosxrrknesixxgnaaiaoqdaoaxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839931.705134-1230-61850371018944/AnsiballZ_copy.py'
Jan 31 06:12:11 compute-0 sudo[238536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:12 compute-0 python3.9[238538]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839931.705134-1230-61850371018944/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:12:12 compute-0 sudo[238536]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:12 compute-0 sudo[238612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzslodjtepbhhltstlgyribsvcttnrof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839931.705134-1230-61850371018944/AnsiballZ_systemd.py'
Jan 31 06:12:12 compute-0 sudo[238612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:12 compute-0 python3.9[238614]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:12:12 compute-0 systemd[1]: Reloading.
Jan 31 06:12:12 compute-0 systemd-rc-local-generator[238642]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:12:12 compute-0 systemd-sysv-generator[238645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:12:12 compute-0 sudo[238612]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:13 compute-0 sudo[238723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stlfemetatvrbppcafuzxfhcazzbqpdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839931.705134-1230-61850371018944/AnsiballZ_systemd.py'
Jan 31 06:12:13 compute-0 sudo[238723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:13 compute-0 python3.9[238725]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:12:13 compute-0 systemd[1]: Reloading.
Jan 31 06:12:13 compute-0 systemd-rc-local-generator[238748]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:12:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:13 compute-0 systemd-sysv-generator[238753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 06:12:13 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 06:12:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:13 compute-0 podman[238764]: 2026-01-31 06:12:13.879798921 +0000 UTC m=+0.094315412 container init c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:12:13 compute-0 podman[238764]: 2026-01-31 06:12:13.885767429 +0000 UTC m=+0.100283900 container start c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 06:12:13 compute-0 nova_compute[238776]: + sudo -E kolla_set_configs
Jan 31 06:12:13 compute-0 podman[238764]: nova_compute
Jan 31 06:12:13 compute-0 systemd[1]: Started nova_compute container.
Jan 31 06:12:13 compute-0 sudo[238723]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Validating config file
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying service configuration files
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Deleting /etc/ceph
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Creating directory /etc/ceph
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Writing out command to execute
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 06:12:13 compute-0 nova_compute[238776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 06:12:13 compute-0 nova_compute[238776]: ++ cat /run_command
Jan 31 06:12:13 compute-0 nova_compute[238776]: + CMD=nova-compute
Jan 31 06:12:13 compute-0 nova_compute[238776]: + ARGS=
Jan 31 06:12:13 compute-0 nova_compute[238776]: + sudo kolla_copy_cacerts
Jan 31 06:12:13 compute-0 nova_compute[238776]: + [[ ! -n '' ]]
Jan 31 06:12:13 compute-0 nova_compute[238776]: + . kolla_extend_start
Jan 31 06:12:13 compute-0 nova_compute[238776]: Running command: 'nova-compute'
Jan 31 06:12:13 compute-0 nova_compute[238776]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 06:12:13 compute-0 nova_compute[238776]: + umask 0022
Jan 31 06:12:13 compute-0 nova_compute[238776]: + exec nova-compute
Jan 31 06:12:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:14 compute-0 ceph-mon[75251]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:14 compute-0 python3.9[238941]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:12:15 compute-0 python3.9[239092]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:12:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:12:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:12:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:12:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:12:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:12:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:12:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:15 compute-0 python3.9[239242]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:12:16 compute-0 ceph-mon[75251]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:16 compute-0 sudo[239392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyqteagsrdtttpuupweemwqwyiflswey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839936.200521-1290-73897419370135/AnsiballZ_podman_container.py'
Jan 31 06:12:16 compute-0 sudo[239392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:16 compute-0 nova_compute[238776]: 2026-01-31 06:12:16.875 238784 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 06:12:16 compute-0 nova_compute[238776]: 2026-01-31 06:12:16.876 238784 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 06:12:16 compute-0 nova_compute[238776]: 2026-01-31 06:12:16.876 238784 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 06:12:16 compute-0 nova_compute[238776]: 2026-01-31 06:12:16.876 238784 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 06:12:16 compute-0 python3.9[239394]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 06:12:17 compute-0 sudo[239392]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:17 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:12:17 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:12:17 compute-0 nova_compute[238776]: 2026-01-31 06:12:17.137 238784 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:12:17 compute-0 nova_compute[238776]: 2026-01-31 06:12:17.199 238784 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:12:17 compute-0 nova_compute[238776]: 2026-01-31 06:12:17.200 238784 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 06:12:17 compute-0 sudo[239573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mximvapazbkoitvnrfwbcezwdqykmman ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839937.1657784-1298-217157304839760/AnsiballZ_systemd.py'
Jan 31 06:12:17 compute-0 sudo[239573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:17 compute-0 python3.9[239575]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:12:17 compute-0 systemd[1]: Stopping nova_compute container...
Jan 31 06:12:18 compute-0 systemd[1]: libpod-c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7.scope: Deactivated successfully.
Jan 31 06:12:18 compute-0 systemd[1]: libpod-c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7.scope: Consumed 2.217s CPU time.
Jan 31 06:12:18 compute-0 podman[239579]: 2026-01-31 06:12:18.016769954 +0000 UTC m=+0.244940280 container died c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 06:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147-merged.mount: Deactivated successfully.
Jan 31 06:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7-userdata-shm.mount: Deactivated successfully.
Jan 31 06:12:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:21 compute-0 ceph-mon[75251]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:27 compute-0 ceph-mds[95670]: mds.beacon.cephfs.compute-0.olydew missed beacon ack from the monitors
Jan 31 06:12:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:28 compute-0 podman[239609]: 2026-01-31 06:12:28.378611748 +0000 UTC m=+0.048215751 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 06:12:28 compute-0 podman[239608]: 2026-01-31 06:12:28.429744101 +0000 UTC m=+0.105258900 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 06:12:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:31 compute-0 ceph-mon[75251]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:31 compute-0 ceph-mon[75251]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:31 compute-0 ceph-mon[75251]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:31 compute-0 ceph-mon[75251]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:37 compute-0 ceph-mon[75251]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:37 compute-0 ceph-mon[75251]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:37 compute-0 ceph-mon[75251]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:37 compute-0 podman[239579]: 2026-01-31 06:12:37.163645392 +0000 UTC m=+19.391815678 container cleanup c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible)
Jan 31 06:12:37 compute-0 podman[239579]: nova_compute
Jan 31 06:12:37 compute-0 podman[239653]: nova_compute
Jan 31 06:12:37 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 06:12:37 compute-0 systemd[1]: Stopped nova_compute container.
Jan 31 06:12:37 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 06:12:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd0fc254ef484a41a6e8d13a2bd313d0267eb61f82877114434c4a299173147/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:37 compute-0 podman[239666]: 2026-01-31 06:12:37.50068021 +0000 UTC m=+0.251807985 container init c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:12:37 compute-0 podman[239666]: 2026-01-31 06:12:37.505043283 +0000 UTC m=+0.256171028 container start c292ef9d9f955edd8745f5716ea608c5cf951bbcf6b32cf10bec14934480c4b7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:12:37 compute-0 nova_compute[239679]: + sudo -E kolla_set_configs
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Validating config file
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying service configuration files
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /etc/ceph
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Creating directory /etc/ceph
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 06:12:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Writing out command to execute
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 06:12:37 compute-0 nova_compute[239679]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 06:12:37 compute-0 nova_compute[239679]: ++ cat /run_command
Jan 31 06:12:37 compute-0 nova_compute[239679]: + CMD=nova-compute
Jan 31 06:12:37 compute-0 nova_compute[239679]: + ARGS=
Jan 31 06:12:37 compute-0 nova_compute[239679]: + sudo kolla_copy_cacerts
Jan 31 06:12:37 compute-0 nova_compute[239679]: + [[ ! -n '' ]]
Jan 31 06:12:37 compute-0 nova_compute[239679]: + . kolla_extend_start
Jan 31 06:12:37 compute-0 nova_compute[239679]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 06:12:37 compute-0 nova_compute[239679]: Running command: 'nova-compute'
Jan 31 06:12:37 compute-0 nova_compute[239679]: + umask 0022
Jan 31 06:12:37 compute-0 nova_compute[239679]: + exec nova-compute
Jan 31 06:12:37 compute-0 podman[239666]: nova_compute
Jan 31 06:12:37 compute-0 systemd[1]: Started nova_compute container.
Jan 31 06:12:37 compute-0 sudo[239573]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:38 compute-0 sudo[239841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhgptyacwnwjsnmbmujexsexomdnmafl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769839957.882491-1307-182261601306078/AnsiballZ_podman_container.py'
Jan 31 06:12:38 compute-0 sudo[239841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:38 compute-0 ceph-mon[75251]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:38 compute-0 ceph-mon[75251]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:38 compute-0 ceph-mon[75251]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:38 compute-0 python3.9[239843]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 06:12:38 compute-0 systemd[1]: Started libpod-conmon-8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0.scope.
Jan 31 06:12:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5601ccff88b708f43f46019d40d2924f1b0dc7012e794b2d8b709fb11e718407/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5601ccff88b708f43f46019d40d2924f1b0dc7012e794b2d8b709fb11e718407/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5601ccff88b708f43f46019d40d2924f1b0dc7012e794b2d8b709fb11e718407/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 06:12:38 compute-0 podman[239867]: 2026-01-31 06:12:38.976482131 +0000 UTC m=+0.567038606 container init 8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:12:38 compute-0 podman[239867]: 2026-01-31 06:12:38.981518783 +0000 UTC m=+0.572075258 container start 8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 06:12:39 compute-0 nova_compute_init[239889]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 06:12:39 compute-0 systemd[1]: libpod-8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0.scope: Deactivated successfully.
Jan 31 06:12:39 compute-0 python3.9[239843]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 06:12:39 compute-0 podman[239890]: 2026-01-31 06:12:39.318235582 +0000 UTC m=+0.153172562 container died 8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 06:12:39 compute-0 nova_compute[239679]: 2026-01-31 06:12:39.365 239684 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 06:12:39 compute-0 nova_compute[239679]: 2026-01-31 06:12:39.366 239684 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 06:12:39 compute-0 nova_compute[239679]: 2026-01-31 06:12:39.366 239684 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 06:12:39 compute-0 nova_compute[239679]: 2026-01-31 06:12:39.366 239684 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 06:12:39 compute-0 nova_compute[239679]: 2026-01-31 06:12:39.487 239684 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:12:39 compute-0 nova_compute[239679]: 2026-01-31 06:12:39.499 239684 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:12:39 compute-0 nova_compute[239679]: 2026-01-31 06:12:39.499 239684 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 06:12:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:40 compute-0 ceph-mon[75251]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0-userdata-shm.mount: Deactivated successfully.
Jan 31 06:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5601ccff88b708f43f46019d40d2924f1b0dc7012e794b2d8b709fb11e718407-merged.mount: Deactivated successfully.
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.394 239684 INFO nova.virt.driver [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 06:12:40 compute-0 podman[239890]: 2026-01-31 06:12:40.40715034 +0000 UTC m=+1.242087310 container cleanup 8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 06:12:40 compute-0 systemd[1]: libpod-conmon-8fa63050b821d59072bd83b622c41471f6850470d3f92a2e85e59945de42d4f0.scope: Deactivated successfully.
Jan 31 06:12:40 compute-0 sudo[239841]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.693 239684 INFO nova.compute.provider_config [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.811 239684 DEBUG oslo_concurrency.lockutils [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.811 239684 DEBUG oslo_concurrency.lockutils [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.812 239684 DEBUG oslo_concurrency.lockutils [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.812 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.813 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.813 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.813 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.813 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.814 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.814 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.815 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.815 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.815 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.815 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.816 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.816 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.816 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.816 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.817 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.817 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.817 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.818 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.818 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.818 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.819 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.819 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.819 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.819 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.820 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.820 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.820 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.821 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.821 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.821 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.821 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.822 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.822 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.822 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.823 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.823 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.823 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.824 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.824 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.824 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.825 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.825 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.825 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.826 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.826 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.826 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.826 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.827 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.827 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.827 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.828 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.828 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.828 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.829 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.829 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.829 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.830 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.830 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.830 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.830 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.831 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.831 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.831 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.832 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.832 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.832 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.833 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.833 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.833 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.834 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.834 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.834 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.834 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.835 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.835 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.835 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.836 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.836 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.836 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.837 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.837 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.837 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.837 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.838 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.838 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.838 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.839 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.839 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.839 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.840 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.840 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.840 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.840 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.840 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.841 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.841 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.841 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.841 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.841 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.841 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.842 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.842 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.842 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.842 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.843 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.843 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.843 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.843 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.843 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.843 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.844 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.844 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.844 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.844 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.845 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.845 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.845 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.845 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.846 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.846 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.846 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.846 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.846 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.847 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.847 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.847 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.847 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.847 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.848 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.848 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.848 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.848 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.848 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.849 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.849 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.849 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.849 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.849 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.850 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.850 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.850 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.850 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.850 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.851 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.851 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.851 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.851 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.851 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.852 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.852 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.852 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.852 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.852 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.852 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.853 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.853 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.853 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.853 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.854 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.854 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.854 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.854 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.854 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.855 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.855 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.855 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.855 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.855 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.856 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.856 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.856 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.856 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.856 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.857 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.857 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.857 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.857 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.858 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.858 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.858 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.858 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.859 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.859 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.859 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.859 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.859 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.860 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.860 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.860 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.860 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.861 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.861 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.861 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.861 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.861 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.862 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.862 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.862 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.862 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.862 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.863 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.863 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.863 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.863 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.863 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.864 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.864 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.864 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.864 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.864 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.865 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.865 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.865 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.865 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.865 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.866 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.866 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.866 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.866 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.866 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.867 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.867 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.867 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.867 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.867 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.867 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.867 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.868 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.868 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.868 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.868 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.868 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.868 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.868 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.869 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.869 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.869 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.869 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.869 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.869 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.869 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.870 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.870 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.870 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.870 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.870 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.870 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.870 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.871 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.871 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.871 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.871 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.871 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.871 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.871 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.871 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.872 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.872 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.872 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.872 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.872 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.872 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.872 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.873 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.873 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.873 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.873 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.873 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.873 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.873 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.874 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.874 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.874 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.874 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.874 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.874 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.874 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.875 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.875 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.875 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.875 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.875 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.875 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.875 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.876 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.876 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.876 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.876 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.876 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.876 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.877 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.877 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.877 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.877 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.877 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.877 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.877 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.878 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.878 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.878 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.878 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.878 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.878 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.879 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.879 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.879 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.879 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.879 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.879 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.879 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.880 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.880 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.880 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.880 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.880 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.880 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.880 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.880 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.881 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.881 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.881 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.881 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.881 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.881 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.881 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.882 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.882 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.882 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.882 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.882 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.882 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.883 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.883 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.883 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.883 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.883 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.883 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.883 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.883 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.884 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.884 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.884 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.884 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.884 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.884 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.884 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.885 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.885 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.885 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.885 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.885 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.885 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.885 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.886 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.886 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.886 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.886 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.886 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.886 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.887 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.887 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.887 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.887 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.887 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.887 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.887 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.888 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.888 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.888 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.888 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.888 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.888 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.888 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.888 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.889 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.889 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.889 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.889 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.889 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.889 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.889 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.890 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.890 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.890 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.890 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.890 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.890 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.890 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.891 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.891 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.891 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.891 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.891 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.891 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.891 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.892 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.892 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.892 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.892 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.892 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.892 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.892 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.893 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.893 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.893 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.893 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.893 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.893 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.894 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.894 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.894 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.894 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.894 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.894 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.894 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.895 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.895 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.895 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.895 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.895 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.895 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.895 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.896 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.896 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.896 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.896 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.896 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.896 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.896 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.896 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.897 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.897 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.897 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.897 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.897 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.897 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.897 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.898 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.898 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.898 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.898 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.898 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.898 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.898 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.899 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.899 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.899 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.899 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.899 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.899 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.899 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.900 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.900 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.900 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.900 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.900 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.900 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.900 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.901 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.901 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.901 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.901 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.901 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.901 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.901 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.902 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.902 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.902 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.902 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.902 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.903 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.903 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.903 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.903 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.903 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.903 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.903 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.903 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.904 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.904 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.904 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.904 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.904 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.904 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.904 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.905 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.905 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.905 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.905 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.905 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.905 239684 WARNING oslo_config.cfg [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 06:12:40 compute-0 nova_compute[239679]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 06:12:40 compute-0 nova_compute[239679]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 06:12:40 compute-0 nova_compute[239679]: and ``live_migration_inbound_addr`` respectively.
Jan 31 06:12:40 compute-0 nova_compute[239679]: ).  Its value may be silently ignored in the future.
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.906 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.906 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.906 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.906 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.906 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.906 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.907 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.907 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.907 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.907 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.907 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.907 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.907 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.907 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.908 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.908 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.908 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.908 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.908 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rbd_secret_uuid        = 797ee2fc-ca49-5eee-87c0-542bb035a7d7 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.908 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.908 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.909 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.909 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.909 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.909 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.909 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.909 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.909 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.910 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.910 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.910 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.910 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.910 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.910 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.911 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.911 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.911 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.911 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.911 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.911 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.911 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.912 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.912 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.912 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.912 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.912 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.912 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.912 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.913 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.913 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.913 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.913 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.913 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.913 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.913 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.914 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.914 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.914 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.914 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.914 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.914 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.914 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.915 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.915 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.915 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.915 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.915 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.915 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.915 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.915 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.916 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.916 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.916 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.916 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.916 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.916 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.916 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.916 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.917 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.917 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.917 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.917 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.917 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.917 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.917 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.918 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.918 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.918 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.918 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.918 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.918 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.918 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.919 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.919 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.919 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.919 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.919 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.919 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.919 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.920 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.920 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.920 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.920 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.920 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.920 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.920 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.920 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.921 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.921 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.921 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.921 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.921 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.921 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.921 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.921 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.922 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.922 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.922 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.922 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.922 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.922 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.923 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.923 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.923 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.923 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.923 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.923 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.923 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.924 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.924 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.924 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.924 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.924 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.924 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.924 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.924 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.925 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.925 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.925 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.925 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.925 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.925 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.926 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.926 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.926 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.926 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.926 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.927 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.927 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.927 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.927 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.927 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.928 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.928 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.928 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.928 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.928 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.929 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.929 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.929 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.929 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.929 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.930 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.930 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.930 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.930 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.930 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.931 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.931 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.931 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.931 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.931 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.932 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.932 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.932 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.932 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.932 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.933 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.933 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.933 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.933 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.934 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.934 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.934 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.934 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.934 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.935 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.935 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.935 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.935 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.935 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.935 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.936 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.936 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.936 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.936 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.936 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.937 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.937 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.937 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.937 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.938 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.938 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.938 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.938 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.938 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.939 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.939 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.939 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.939 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.939 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.940 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.940 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.940 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.940 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.940 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.941 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.941 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.941 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.941 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.941 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.942 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.942 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.942 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.942 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.942 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.943 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.943 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.943 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.943 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.943 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.944 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.944 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.944 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.944 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.944 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.944 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.945 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.945 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.945 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.945 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.945 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.946 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.946 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.946 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.946 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.946 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.947 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.947 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.947 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.947 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.948 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.948 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.948 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.948 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.948 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.949 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.949 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.949 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.949 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.949 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.950 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.950 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.950 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.950 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.950 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.951 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.951 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.951 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.951 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.951 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.952 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.952 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 sshd-session[215375]: Connection closed by 192.168.122.30 port 53670
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.952 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.952 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.952 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.953 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.953 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.953 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.953 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.953 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.954 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.954 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.954 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.954 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.954 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.955 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.955 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.955 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.955 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.955 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.956 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.956 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.956 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.956 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 sshd-session[215365]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.956 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.957 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.957 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.957 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.957 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.957 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.958 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.958 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.958 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.958 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 systemd[1]: session-49.scope: Consumed 1min 40.198s CPU time.
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.959 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.959 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.959 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.959 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 systemd-logind[797]: Session 49 logged out. Waiting for processes to exit.
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.960 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.960 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.960 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.961 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.961 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.961 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 systemd-logind[797]: Removed session 49.
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.961 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.961 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.962 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.962 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.962 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.962 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.962 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.963 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.963 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.963 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.963 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.963 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.963 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.963 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.964 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.964 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.964 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.964 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.964 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.964 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.965 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.965 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.965 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.965 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.965 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.965 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.965 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.966 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.966 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.966 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.966 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.966 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.967 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.967 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.967 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.967 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.967 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.967 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.968 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.968 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.968 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.968 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.968 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.968 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.969 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.969 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.969 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.969 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.969 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.969 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.970 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.970 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.970 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.970 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.970 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.970 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.971 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.971 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.971 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.971 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.971 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.972 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.972 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.972 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.972 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.972 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.972 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.973 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.973 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.973 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.973 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.973 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.973 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.973 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.974 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.974 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.974 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.974 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.974 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.975 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.975 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.975 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.975 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.975 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.975 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.975 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.976 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.976 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.976 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.976 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.976 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.976 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.976 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.977 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.977 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.977 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.977 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.977 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.977 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.978 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.978 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.978 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.978 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.978 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.978 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.979 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.979 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.979 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.979 239684 DEBUG oslo_service.service [None req-64179dc9-bea1-47a7-8d52-23d59d58faa0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 06:12:40 compute-0 nova_compute[239679]: 2026-01-31 06:12:40.980 239684 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.163 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.164 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.164 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.164 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 06:12:41 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 06:12:41 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.298 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f571ad26fa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.301 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f571ad26fa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.302 239684 INFO nova.virt.libvirt.driver [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Connection event '1' reason 'None'
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.336 239684 WARNING nova.virt.libvirt.driver [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 06:12:41 compute-0 nova_compute[239679]: 2026-01-31 06:12:41.336 239684 DEBUG nova.virt.libvirt.volume.mount [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 06:12:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.238 239684 INFO nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]: 
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <host>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <uuid>96867758-a14c-47b4-9648-f2b42c325de8</uuid>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <arch>x86_64</arch>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model>EPYC-Rome-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <vendor>AMD</vendor>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <microcode version='16777317'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <signature family='23' model='49' stepping='0'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='x2apic'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='tsc-deadline'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='osxsave'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='hypervisor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='tsc_adjust'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='spec-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='stibp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='arch-capabilities'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='cmp_legacy'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='topoext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='virt-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='lbrv'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='tsc-scale'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='vmcb-clean'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='pause-filter'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='pfthreshold'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='svme-addr-chk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='rdctl-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='skip-l1dfl-vmentry'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='mds-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature name='pschange-mc-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <pages unit='KiB' size='4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <pages unit='KiB' size='2048'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <pages unit='KiB' size='1048576'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <power_management>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <suspend_mem/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </power_management>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <iommu support='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <migration_features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <live/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <uri_transports>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <uri_transport>tcp</uri_transport>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <uri_transport>rdma</uri_transport>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </uri_transports>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </migration_features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <topology>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <cells num='1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <cell id='0'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:           <memory unit='KiB'>7864288</memory>
Jan 31 06:12:42 compute-0 nova_compute[239679]:           <pages unit='KiB' size='4'>1966072</pages>
Jan 31 06:12:42 compute-0 nova_compute[239679]:           <pages unit='KiB' size='2048'>0</pages>
Jan 31 06:12:42 compute-0 nova_compute[239679]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 31 06:12:42 compute-0 nova_compute[239679]:           <distances>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <sibling id='0' value='10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:           </distances>
Jan 31 06:12:42 compute-0 nova_compute[239679]:           <cpus num='8'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:           </cpus>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         </cell>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </cells>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </topology>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <cache>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </cache>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <secmodel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model>selinux</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <doi>0</doi>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </secmodel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <secmodel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model>dac</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <doi>0</doi>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </secmodel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </host>
Jan 31 06:12:42 compute-0 nova_compute[239679]: 
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <guest>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <os_type>hvm</os_type>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <arch name='i686'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <wordsize>32</wordsize>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <domain type='qemu'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <domain type='kvm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </arch>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <pae/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <nonpae/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <acpi default='on' toggle='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <apic default='on' toggle='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <cpuselection/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <deviceboot/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <disksnapshot default='on' toggle='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <externalSnapshot/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </guest>
Jan 31 06:12:42 compute-0 nova_compute[239679]: 
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <guest>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <os_type>hvm</os_type>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <arch name='x86_64'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <wordsize>64</wordsize>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <domain type='qemu'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <domain type='kvm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </arch>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <acpi default='on' toggle='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <apic default='on' toggle='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <cpuselection/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <deviceboot/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <disksnapshot default='on' toggle='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <externalSnapshot/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </guest>
Jan 31 06:12:42 compute-0 nova_compute[239679]: 
Jan 31 06:12:42 compute-0 nova_compute[239679]: </capabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]: 
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.245 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.316 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 06:12:42 compute-0 nova_compute[239679]: <domainCapabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <domain>kvm</domain>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <arch>i686</arch>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <vcpu max='4096'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <iothreads supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <os supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <enum name='firmware'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <loader supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>rom</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pflash</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='readonly'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>yes</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>no</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='secure'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>no</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </loader>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </os>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='host-passthrough' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='hostPassthroughMigratable'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>on</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>off</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='maximum' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='maximumMigratable'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>on</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>off</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='host-model' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <vendor>AMD</vendor>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='x2apic'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='hypervisor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='stibp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='overflow-recov'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='succor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='lbrv'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc-scale'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='flushbyasid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='pause-filter'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='pfthreshold'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='disable' name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='custom' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='ClearwaterForest'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ddpd-u'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sha512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='ClearwaterForest-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ddpd-u'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sha512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Dhyana-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Turin'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vp2intersect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibpb-brtype'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbpb'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='srso-user-kernel-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Turin-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vp2intersect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibpb-brtype'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbpb'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='srso-user-kernel-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-128'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-256'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-128'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-256'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v6'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v7'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='KnightsMill'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4fmaps'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4vnniw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512er'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512pf'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='KnightsMill-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4fmaps'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4vnniw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512er'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512pf'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G4-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tbm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G5-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tbm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='athlon'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='athlon-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='core2duo'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='core2duo-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='coreduo'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='coreduo-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='n270'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='n270-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='phenom'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='phenom-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <memoryBacking supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <enum name='sourceType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>file</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>anonymous</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>memfd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </memoryBacking>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <devices>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <disk supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='diskDevice'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>disk</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>cdrom</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>floppy</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>lun</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='bus'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>fdc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>scsi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>sata</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-non-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </disk>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <graphics supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vnc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>egl-headless</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dbus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </graphics>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <video supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='modelType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vga</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>cirrus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>none</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>bochs</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ramfb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </video>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <hostdev supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='mode'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>subsystem</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='startupPolicy'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>default</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>mandatory</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>requisite</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>optional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='subsysType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pci</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>scsi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='capsType'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='pciBackend'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </hostdev>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <rng supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-non-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>random</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>egd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>builtin</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </rng>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <filesystem supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='driverType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>path</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>handle</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtiofs</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </filesystem>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <tpm supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tpm-tis</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tpm-crb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>emulator</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>external</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendVersion'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>2.0</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </tpm>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <redirdev supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='bus'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </redirdev>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <channel supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pty</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>unix</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </channel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <crypto supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>qemu</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>builtin</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </crypto>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <interface supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>default</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>passt</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </interface>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <panic supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>isa</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>hyperv</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </panic>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <console supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>null</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pty</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dev</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>file</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pipe</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>stdio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>udp</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tcp</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>unix</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>qemu-vdagent</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dbus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </console>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </devices>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <gic supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <vmcoreinfo supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <genid supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <backingStoreInput supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <backup supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <async-teardown supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <s390-pv supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <ps2 supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <tdx supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <sev supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <sgx supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <hyperv supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='features'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>relaxed</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vapic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>spinlocks</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vpindex</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>runtime</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>synic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>stimer</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>reset</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vendor_id</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>frequencies</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>reenlightenment</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tlbflush</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ipi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>avic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>emsr_bitmap</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>xmm_input</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <defaults>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <spinlocks>4095</spinlocks>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <stimer_direct>on</stimer_direct>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </defaults>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </hyperv>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <launchSecurity supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </features>
Jan 31 06:12:42 compute-0 nova_compute[239679]: </domainCapabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.324 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 06:12:42 compute-0 nova_compute[239679]: <domainCapabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <domain>kvm</domain>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <arch>i686</arch>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <vcpu max='240'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <iothreads supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <os supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <enum name='firmware'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <loader supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>rom</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pflash</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='readonly'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>yes</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>no</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='secure'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>no</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </loader>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </os>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='host-passthrough' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='hostPassthroughMigratable'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>on</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>off</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='maximum' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='maximumMigratable'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>on</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>off</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='host-model' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <vendor>AMD</vendor>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='x2apic'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='hypervisor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='stibp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='overflow-recov'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='succor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='lbrv'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc-scale'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='flushbyasid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='pause-filter'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='pfthreshold'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='disable' name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='custom' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='ClearwaterForest'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ddpd-u'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sha512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='ClearwaterForest-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ddpd-u'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sha512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Dhyana-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Turin'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vp2intersect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibpb-brtype'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbpb'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='srso-user-kernel-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Turin-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vp2intersect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibpb-brtype'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbpb'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='srso-user-kernel-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-128'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-256'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-128'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-256'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v6'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v7'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='KnightsMill'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4fmaps'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4vnniw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512er'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512pf'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='KnightsMill-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4fmaps'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4vnniw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512er'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512pf'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G4-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tbm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G5-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tbm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='athlon'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='athlon-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='core2duo'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='core2duo-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='coreduo'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='coreduo-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='n270'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='n270-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='phenom'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='phenom-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <memoryBacking supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <enum name='sourceType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>file</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>anonymous</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>memfd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </memoryBacking>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <devices>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <disk supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='diskDevice'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>disk</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>cdrom</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>floppy</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>lun</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='bus'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ide</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>fdc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>scsi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>sata</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-non-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </disk>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <graphics supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vnc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>egl-headless</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dbus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </graphics>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <video supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='modelType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vga</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>cirrus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>none</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>bochs</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ramfb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </video>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <hostdev supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='mode'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>subsystem</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='startupPolicy'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>default</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>mandatory</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>requisite</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>optional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='subsysType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pci</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>scsi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='capsType'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='pciBackend'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </hostdev>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <rng supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-non-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>random</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>egd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>builtin</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </rng>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <filesystem supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='driverType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>path</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>handle</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtiofs</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </filesystem>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <tpm supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tpm-tis</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tpm-crb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>emulator</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>external</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendVersion'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>2.0</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </tpm>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <redirdev supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='bus'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </redirdev>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <channel supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pty</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>unix</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </channel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <crypto supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>qemu</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>builtin</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </crypto>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <interface supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>default</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>passt</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </interface>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <panic supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>isa</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>hyperv</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </panic>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <console supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>null</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pty</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dev</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>file</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pipe</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>stdio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>udp</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tcp</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>unix</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>qemu-vdagent</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dbus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </console>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </devices>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <gic supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <vmcoreinfo supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <genid supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <backingStoreInput supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <backup supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <async-teardown supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <s390-pv supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <ps2 supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <tdx supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <sev supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <sgx supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <hyperv supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='features'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>relaxed</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vapic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>spinlocks</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vpindex</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>runtime</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>synic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>stimer</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>reset</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vendor_id</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>frequencies</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>reenlightenment</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tlbflush</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ipi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>avic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>emsr_bitmap</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>xmm_input</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <defaults>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <spinlocks>4095</spinlocks>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <stimer_direct>on</stimer_direct>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </defaults>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </hyperv>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <launchSecurity supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </features>
Jan 31 06:12:42 compute-0 nova_compute[239679]: </domainCapabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.368 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.373 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 06:12:42 compute-0 nova_compute[239679]: <domainCapabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <domain>kvm</domain>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <arch>x86_64</arch>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <vcpu max='4096'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <iothreads supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <os supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <enum name='firmware'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>efi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <loader supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>rom</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pflash</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='readonly'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>yes</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>no</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='secure'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>yes</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>no</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </loader>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </os>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='host-passthrough' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='hostPassthroughMigratable'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>on</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>off</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='maximum' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='maximumMigratable'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>on</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>off</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='host-model' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <vendor>AMD</vendor>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='x2apic'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='hypervisor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='stibp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='overflow-recov'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='succor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='lbrv'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc-scale'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='flushbyasid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='pause-filter'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='pfthreshold'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='disable' name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='custom' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='ClearwaterForest'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ddpd-u'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sha512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='ClearwaterForest-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ddpd-u'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sha512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Dhyana-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Turin'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vp2intersect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibpb-brtype'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbpb'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='srso-user-kernel-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Turin-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vp2intersect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibpb-brtype'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbpb'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='srso-user-kernel-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-128'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-256'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-128'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-256'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v6'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v7'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='KnightsMill'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4fmaps'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4vnniw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512er'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512pf'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='KnightsMill-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4fmaps'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4vnniw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512er'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512pf'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G4-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tbm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G5-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tbm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='athlon'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='athlon-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='core2duo'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='core2duo-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='coreduo'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='coreduo-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='n270'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='n270-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='phenom'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='phenom-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <memoryBacking supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <enum name='sourceType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>file</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>anonymous</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>memfd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </memoryBacking>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <devices>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <disk supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='diskDevice'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>disk</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>cdrom</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>floppy</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>lun</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='bus'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>fdc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>scsi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>sata</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-non-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </disk>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <graphics supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vnc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>egl-headless</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dbus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </graphics>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <video supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='modelType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vga</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>cirrus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>none</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>bochs</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ramfb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </video>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <hostdev supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='mode'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>subsystem</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='startupPolicy'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>default</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>mandatory</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>requisite</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>optional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='subsysType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pci</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>scsi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='capsType'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='pciBackend'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </hostdev>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <rng supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-non-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>random</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>egd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>builtin</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </rng>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <filesystem supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='driverType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>path</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>handle</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtiofs</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </filesystem>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <tpm supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tpm-tis</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tpm-crb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>emulator</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>external</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendVersion'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>2.0</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </tpm>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <redirdev supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='bus'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </redirdev>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <channel supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pty</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>unix</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </channel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <crypto supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>qemu</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>builtin</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </crypto>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <interface supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>default</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>passt</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </interface>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <panic supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>isa</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>hyperv</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </panic>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <console supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>null</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pty</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dev</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>file</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pipe</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>stdio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>udp</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tcp</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>unix</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>qemu-vdagent</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dbus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </console>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </devices>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <gic supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <vmcoreinfo supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <genid supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <backingStoreInput supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <backup supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <async-teardown supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <s390-pv supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <ps2 supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <tdx supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <sev supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <sgx supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <hyperv supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='features'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>relaxed</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vapic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>spinlocks</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vpindex</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>runtime</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>synic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>stimer</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>reset</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vendor_id</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>frequencies</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>reenlightenment</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tlbflush</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ipi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>avic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>emsr_bitmap</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>xmm_input</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <defaults>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <spinlocks>4095</spinlocks>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <stimer_direct>on</stimer_direct>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </defaults>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </hyperv>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <launchSecurity supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </features>
Jan 31 06:12:42 compute-0 nova_compute[239679]: </domainCapabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.437 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 06:12:42 compute-0 nova_compute[239679]: <domainCapabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <domain>kvm</domain>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <arch>x86_64</arch>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <vcpu max='240'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <iothreads supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <os supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <enum name='firmware'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <loader supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>rom</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pflash</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='readonly'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>yes</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>no</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='secure'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>no</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </loader>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </os>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='host-passthrough' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='hostPassthroughMigratable'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>on</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>off</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='maximum' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='maximumMigratable'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>on</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>off</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='host-model' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <vendor>AMD</vendor>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='x2apic'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='hypervisor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='stibp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='overflow-recov'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='succor'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='lbrv'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='tsc-scale'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='flushbyasid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='pause-filter'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='pfthreshold'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <feature policy='disable' name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <mode name='custom' supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Broadwell-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='ClearwaterForest'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ddpd-u'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sha512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='ClearwaterForest-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ddpd-u'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sha512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm3'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sm4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Cooperlake-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Denverton-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Dhyana-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Milan-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Rome-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Turin'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vp2intersect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibpb-brtype'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbpb'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='srso-user-kernel-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-Turin-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amd-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='auto-ibrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vp2intersect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fs-gs-base-ns'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibpb-brtype'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='no-nested-data-bp'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='null-sel-clr-base'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='perfmon-v2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbpb'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='srso-user-kernel-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='stibp-always-on'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='EPYC-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-128'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-256'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='GraniteRapids-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-128'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-256'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx10-512'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='prefetchiti'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Haswell-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v6'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Icelake-Server-v7'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='IvyBridge-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='KnightsMill'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4fmaps'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4vnniw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512er'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512pf'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='KnightsMill-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4fmaps'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-4vnniw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512er'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512pf'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G4-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tbm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Opteron_G5-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fma4'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tbm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xop'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SapphireRapids-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='amx-tile'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-bf16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-fp16'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512-vpopcntdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bitalg'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vbmi2'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrc'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fzrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='la57'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='taa-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='tsx-ldtrk'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='SierraForest-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ifma'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-ne-convert'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx-vnni-int8'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bhi-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='bus-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cmpccxadd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fbsdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='fsrs'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ibrs-all'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='intel-psfd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ipred-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='lam'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mcdt-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pbrsb-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='psdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rrsba-ctrl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='sbdr-ssdp-no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='serialize'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vaes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='vpclmulqdq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Client-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='hle'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='rtm'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Skylake-Server-v5'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512bw'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512cd'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512dq'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512f'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='avx512vl'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='invpcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pcid'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='pku'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='mpx'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v2'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v3'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='core-capability'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='split-lock-detect'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='Snowridge-v4'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='cldemote'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='erms'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='gfni'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdir64b'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='movdiri'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='xsaves'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='athlon'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='athlon-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='core2duo'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='core2duo-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='coreduo'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='coreduo-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='n270'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='n270-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='ss'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='phenom'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <blockers model='phenom-v1'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnow'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <feature name='3dnowext'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </blockers>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </mode>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </cpu>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <memoryBacking supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <enum name='sourceType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>file</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>anonymous</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <value>memfd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </memoryBacking>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <devices>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <disk supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='diskDevice'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>disk</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>cdrom</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>floppy</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>lun</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='bus'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ide</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>fdc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>scsi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>sata</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-non-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </disk>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <graphics supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vnc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>egl-headless</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dbus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </graphics>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <video supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='modelType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vga</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>cirrus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>none</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>bochs</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ramfb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </video>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <hostdev supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='mode'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>subsystem</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='startupPolicy'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>default</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>mandatory</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>requisite</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>optional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='subsysType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pci</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>scsi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='capsType'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='pciBackend'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </hostdev>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <rng supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtio-non-transitional</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>random</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>egd</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>builtin</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </rng>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <filesystem supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='driverType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>path</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>handle</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>virtiofs</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </filesystem>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <tpm supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tpm-tis</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tpm-crb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>emulator</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>external</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendVersion'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>2.0</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </tpm>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <redirdev supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='bus'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>usb</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </redirdev>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <channel supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pty</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>unix</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </channel>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <crypto supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>qemu</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendModel'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>builtin</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </crypto>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <interface supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='backendType'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>default</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>passt</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </interface>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <panic supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='model'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>isa</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>hyperv</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </panic>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <console supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='type'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>null</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vc</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pty</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dev</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>file</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>pipe</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>stdio</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>udp</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tcp</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>unix</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>qemu-vdagent</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>dbus</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </console>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </devices>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   <features>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <gic supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <vmcoreinfo supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <genid supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <backingStoreInput supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <backup supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <async-teardown supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <s390-pv supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <ps2 supported='yes'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <tdx supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <sev supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <sgx supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <hyperv supported='yes'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <enum name='features'>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>relaxed</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vapic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>spinlocks</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vpindex</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>runtime</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>synic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>stimer</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>reset</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>vendor_id</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>frequencies</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>reenlightenment</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>tlbflush</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>ipi</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>avic</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>emsr_bitmap</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <value>xmm_input</value>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </enum>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       <defaults>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <spinlocks>4095</spinlocks>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <stimer_direct>on</stimer_direct>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 06:12:42 compute-0 nova_compute[239679]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 06:12:42 compute-0 nova_compute[239679]:       </defaults>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     </hyperv>
Jan 31 06:12:42 compute-0 nova_compute[239679]:     <launchSecurity supported='no'/>
Jan 31 06:12:42 compute-0 nova_compute[239679]:   </features>
Jan 31 06:12:42 compute-0 nova_compute[239679]: </domainCapabilities>
Jan 31 06:12:42 compute-0 nova_compute[239679]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.498 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.498 239684 INFO nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Secure Boot support detected
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.500 239684 INFO nova.virt.libvirt.driver [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.500 239684 INFO nova.virt.libvirt.driver [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.507 239684 DEBUG nova.virt.libvirt.driver [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 06:12:42 compute-0 ceph-mon[75251]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:42 compute-0 nova_compute[239679]: 2026-01-31 06:12:42.941 239684 INFO nova.virt.node [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Determined node identity b3aa6abb-42c7-4433-b36f-4272440bddc9 from /var/lib/nova/compute_id
Jan 31 06:12:43 compute-0 nova_compute[239679]: 2026-01-31 06:12:43.108 239684 WARNING nova.compute.manager [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Compute nodes ['b3aa6abb-42c7-4433-b36f-4272440bddc9'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 06:12:43 compute-0 nova_compute[239679]: 2026-01-31 06:12:43.243 239684 INFO nova.compute.manager [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 06:12:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:43 compute-0 nova_compute[239679]: 2026-01-31 06:12:43.794 239684 WARNING nova.compute.manager [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 06:12:43 compute-0 nova_compute[239679]: 2026-01-31 06:12:43.795 239684 DEBUG oslo_concurrency.lockutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:12:43 compute-0 nova_compute[239679]: 2026-01-31 06:12:43.795 239684 DEBUG oslo_concurrency.lockutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:12:43 compute-0 nova_compute[239679]: 2026-01-31 06:12:43.795 239684 DEBUG oslo_concurrency.lockutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:12:43 compute-0 nova_compute[239679]: 2026-01-31 06:12:43.795 239684 DEBUG nova.compute.resource_tracker [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:12:43 compute-0 nova_compute[239679]: 2026-01-31 06:12:43.795 239684 DEBUG oslo_concurrency.processutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:12:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:12:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417582215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:12:44 compute-0 nova_compute[239679]: 2026-01-31 06:12:44.314 239684 DEBUG oslo_concurrency.processutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:12:44 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 06:12:44 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 06:12:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:12:44
Jan 31 06:12:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:12:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:12:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.control', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Jan 31 06:12:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:12:44 compute-0 nova_compute[239679]: 2026-01-31 06:12:44.620 239684 WARNING nova.virt.libvirt.driver [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:12:44 compute-0 nova_compute[239679]: 2026-01-31 06:12:44.621 239684 DEBUG nova.compute.resource_tracker [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:12:44 compute-0 nova_compute[239679]: 2026-01-31 06:12:44.621 239684 DEBUG oslo_concurrency.lockutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:12:44 compute-0 nova_compute[239679]: 2026-01-31 06:12:44.621 239684 DEBUG oslo_concurrency.lockutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:12:44 compute-0 ceph-mon[75251]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:44 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1417582215' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:12:44 compute-0 nova_compute[239679]: 2026-01-31 06:12:44.822 239684 WARNING nova.compute.resource_tracker [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] No compute node record for compute-0.ctlplane.example.com:b3aa6abb-42c7-4433-b36f-4272440bddc9: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host b3aa6abb-42c7-4433-b36f-4272440bddc9 could not be found.
Jan 31 06:12:44 compute-0 nova_compute[239679]: 2026-01-31 06:12:44.929 239684 INFO nova.compute.resource_tracker [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: b3aa6abb-42c7-4433-b36f-4272440bddc9
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:12:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:45 compute-0 nova_compute[239679]: 2026-01-31 06:12:45.655 239684 DEBUG nova.compute.resource_tracker [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:12:45 compute-0 nova_compute[239679]: 2026-01-31 06:12:45.656 239684 DEBUG nova.compute.resource_tracker [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:12:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:46 compute-0 ceph-mon[75251]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:47 compute-0 nova_compute[239679]: 2026-01-31 06:12:47.162 239684 INFO nova.scheduler.client.report [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] [req-b80aa052-cec9-4c80-9d2c-d791cea0db3e] Created resource provider record via placement API for resource provider with UUID b3aa6abb-42c7-4433-b36f-4272440bddc9 and name compute-0.ctlplane.example.com.
Jan 31 06:12:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:47 compute-0 nova_compute[239679]: 2026-01-31 06:12:47.858 239684 DEBUG oslo_concurrency.processutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:12:47 compute-0 ceph-mon[75251]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:12:48 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940502921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:12:48 compute-0 nova_compute[239679]: 2026-01-31 06:12:48.404 239684 DEBUG oslo_concurrency.processutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:12:48 compute-0 nova_compute[239679]: 2026-01-31 06:12:48.407 239684 DEBUG nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 06:12:48 compute-0 nova_compute[239679]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 31 06:12:48 compute-0 nova_compute[239679]: 2026-01-31 06:12:48.408 239684 INFO nova.virt.libvirt.host [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] kernel doesn't support AMD SEV
Jan 31 06:12:48 compute-0 nova_compute[239679]: 2026-01-31 06:12:48.408 239684 DEBUG nova.compute.provider_tree [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Updating inventory in ProviderTree for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 06:12:48 compute-0 nova_compute[239679]: 2026-01-31 06:12:48.409 239684 DEBUG nova.virt.libvirt.driver [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 06:12:48 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1940502921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:12:48 compute-0 nova_compute[239679]: 2026-01-31 06:12:48.970 239684 DEBUG nova.scheduler.client.report [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Updated inventory for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 06:12:48 compute-0 nova_compute[239679]: 2026-01-31 06:12:48.970 239684 DEBUG nova.compute.provider_tree [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Updating resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 06:12:48 compute-0 nova_compute[239679]: 2026-01-31 06:12:48.970 239684 DEBUG nova.compute.provider_tree [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Updating inventory in ProviderTree for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 06:12:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:50 compute-0 ceph-mon[75251]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:50 compute-0 nova_compute[239679]: 2026-01-31 06:12:50.129 239684 DEBUG nova.compute.provider_tree [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Updating resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 06:12:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:12:50.206 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:12:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:12:50.206 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:12:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:12:50.206 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:12:50 compute-0 nova_compute[239679]: 2026-01-31 06:12:50.705 239684 DEBUG nova.compute.resource_tracker [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:12:50 compute-0 nova_compute[239679]: 2026-01-31 06:12:50.706 239684 DEBUG oslo_concurrency.lockutils [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:12:50 compute-0 nova_compute[239679]: 2026-01-31 06:12:50.706 239684 DEBUG nova.service [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 06:12:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:52 compute-0 ceph-mon[75251]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:53 compute-0 nova_compute[239679]: 2026-01-31 06:12:53.109 239684 DEBUG nova.service [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 06:12:53 compute-0 nova_compute[239679]: 2026-01-31 06:12:53.110 239684 DEBUG nova.servicegroup.drivers.db [None req-bdd0ae81-dc5a-4330-a60a-7d5fefb49121 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 06:12:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:53 compute-0 ceph-mon[75251]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:12:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:12:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:12:56 compute-0 ceph-mon[75251]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:57 compute-0 ceph-mon[75251]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:12:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:00 compute-0 ceph-mon[75251]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:02 compute-0 ceph-mon[75251]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:04 compute-0 ceph-mon[75251]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:06 compute-0 ceph-mon[75251]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:07 compute-0 podman[240089]: 2026-01-31 06:13:07.155184856 +0000 UTC m=+0.071325123 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:13:07 compute-0 podman[240088]: 2026-01-31 06:13:07.155972168 +0000 UTC m=+0.073979758 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:13:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:07 compute-0 ceph-mon[75251]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:10 compute-0 ceph-mon[75251]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:10 compute-0 sudo[240131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:13:10 compute-0 sudo[240131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:10 compute-0 sudo[240131]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:11 compute-0 sudo[240156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:13:11 compute-0 sudo[240156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:11 compute-0 sudo[240156]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:13:11 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:13:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:13:11 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:13:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:13:11 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:13:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:13:11 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:13:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:13:11 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:13:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:13:11 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:13:11 compute-0 sudo[240213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:13:11 compute-0 sudo[240213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:11 compute-0 sudo[240213]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:11 compute-0 sudo[240238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:13:11 compute-0 sudo[240238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:13:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:13:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:13:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:13:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:13:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:13:11 compute-0 podman[240275]: 2026-01-31 06:13:11.829768428 +0000 UTC m=+0.037106268 container create 84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galois, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:13:11 compute-0 systemd[1]: Started libpod-conmon-84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2.scope.
Jan 31 06:13:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:13:11 compute-0 podman[240275]: 2026-01-31 06:13:11.893785714 +0000 UTC m=+0.101123564 container init 84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galois, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 06:13:11 compute-0 podman[240275]: 2026-01-31 06:13:11.901553373 +0000 UTC m=+0.108891213 container start 84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galois, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:13:11 compute-0 podman[240275]: 2026-01-31 06:13:11.904721102 +0000 UTC m=+0.112058942 container attach 84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:13:11 compute-0 podman[240275]: 2026-01-31 06:13:11.811093711 +0000 UTC m=+0.018431561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:13:11 compute-0 laughing_galois[240291]: 167 167
Jan 31 06:13:11 compute-0 systemd[1]: libpod-84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2.scope: Deactivated successfully.
Jan 31 06:13:11 compute-0 podman[240275]: 2026-01-31 06:13:11.909144477 +0000 UTC m=+0.116482327 container died 84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galois, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 06:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1f7b292b9ee8402fc5b66ef6a232eb28f54507a3d49c810925f4431eb843e7f-merged.mount: Deactivated successfully.
Jan 31 06:13:11 compute-0 podman[240275]: 2026-01-31 06:13:11.965686322 +0000 UTC m=+0.173024162 container remove 84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 06:13:11 compute-0 systemd[1]: libpod-conmon-84ad05214dac99d413bcf9476adcd0490eaef13f67b53544aed7430cecca2ff2.scope: Deactivated successfully.
Jan 31 06:13:12 compute-0 podman[240315]: 2026-01-31 06:13:12.137085348 +0000 UTC m=+0.059387347 container create a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 06:13:12 compute-0 systemd[1]: Started libpod-conmon-a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9.scope.
Jan 31 06:13:12 compute-0 podman[240315]: 2026-01-31 06:13:12.113372469 +0000 UTC m=+0.035674488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:13:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf064a390e4283522c8395545809c42899cdb595a8b0f3396378618941feb59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf064a390e4283522c8395545809c42899cdb595a8b0f3396378618941feb59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf064a390e4283522c8395545809c42899cdb595a8b0f3396378618941feb59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf064a390e4283522c8395545809c42899cdb595a8b0f3396378618941feb59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf064a390e4283522c8395545809c42899cdb595a8b0f3396378618941feb59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:12 compute-0 podman[240315]: 2026-01-31 06:13:12.270263455 +0000 UTC m=+0.192565464 container init a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_napier, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 06:13:12 compute-0 podman[240315]: 2026-01-31 06:13:12.276593163 +0000 UTC m=+0.198895162 container start a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:13:12 compute-0 podman[240315]: 2026-01-31 06:13:12.323827336 +0000 UTC m=+0.246129325 container attach a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_napier, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:13:12 compute-0 fervent_napier[240332]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:13:12 compute-0 fervent_napier[240332]: --> All data devices are unavailable
Jan 31 06:13:12 compute-0 ceph-mon[75251]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:12 compute-0 systemd[1]: libpod-a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9.scope: Deactivated successfully.
Jan 31 06:13:12 compute-0 podman[240315]: 2026-01-31 06:13:12.659001601 +0000 UTC m=+0.581303590 container died a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_napier, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccf064a390e4283522c8395545809c42899cdb595a8b0f3396378618941feb59-merged.mount: Deactivated successfully.
Jan 31 06:13:12 compute-0 podman[240315]: 2026-01-31 06:13:12.72488562 +0000 UTC m=+0.647187609 container remove a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:13:12 compute-0 systemd[1]: libpod-conmon-a2a8fe8e1b3bdcb3244440a8513b40e0a4308315cc53ab222998449a0ac51bb9.scope: Deactivated successfully.
Jan 31 06:13:12 compute-0 sudo[240238]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:12 compute-0 sudo[240366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:13:12 compute-0 sudo[240366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:12 compute-0 sudo[240366]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:12 compute-0 sudo[240391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:13:12 compute-0 sudo[240391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:13 compute-0 podman[240428]: 2026-01-31 06:13:13.131628284 +0000 UTC m=+0.030067839 container create 785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 06:13:13 compute-0 systemd[1]: Started libpod-conmon-785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b.scope.
Jan 31 06:13:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:13:13 compute-0 podman[240428]: 2026-01-31 06:13:13.187818319 +0000 UTC m=+0.086257884 container init 785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:13:13 compute-0 podman[240428]: 2026-01-31 06:13:13.192074449 +0000 UTC m=+0.090514004 container start 785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:13:13 compute-0 compassionate_hamilton[240444]: 167 167
Jan 31 06:13:13 compute-0 systemd[1]: libpod-785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b.scope: Deactivated successfully.
Jan 31 06:13:13 compute-0 conmon[240444]: conmon 785115daf3174475da97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b.scope/container/memory.events
Jan 31 06:13:13 compute-0 podman[240428]: 2026-01-31 06:13:13.19776453 +0000 UTC m=+0.096204115 container attach 785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 06:13:13 compute-0 podman[240428]: 2026-01-31 06:13:13.198001186 +0000 UTC m=+0.096440741 container died 785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:13:13 compute-0 podman[240428]: 2026-01-31 06:13:13.118534755 +0000 UTC m=+0.016974310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ecc7760538b20e64fc4f7ffe3fb667112fee39ef929cd8c2ed03e6485f3bc17-merged.mount: Deactivated successfully.
Jan 31 06:13:13 compute-0 podman[240428]: 2026-01-31 06:13:13.260790518 +0000 UTC m=+0.159230073 container remove 785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 06:13:13 compute-0 systemd[1]: libpod-conmon-785115daf3174475da978992abfcafa07433c37219a3cddc89bd251dc68ac00b.scope: Deactivated successfully.
Jan 31 06:13:13 compute-0 podman[240471]: 2026-01-31 06:13:13.40516436 +0000 UTC m=+0.059090177 container create 876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 06:13:13 compute-0 systemd[1]: Started libpod-conmon-876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839.scope.
Jan 31 06:13:13 compute-0 podman[240471]: 2026-01-31 06:13:13.376327377 +0000 UTC m=+0.030253224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:13:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02e92cab432cd29b3f6705ee7aea3bdc1e1e35d5b1cc86685f899f4bf3228bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02e92cab432cd29b3f6705ee7aea3bdc1e1e35d5b1cc86685f899f4bf3228bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02e92cab432cd29b3f6705ee7aea3bdc1e1e35d5b1cc86685f899f4bf3228bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02e92cab432cd29b3f6705ee7aea3bdc1e1e35d5b1cc86685f899f4bf3228bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:13 compute-0 podman[240471]: 2026-01-31 06:13:13.51572779 +0000 UTC m=+0.169653647 container init 876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 06:13:13 compute-0 podman[240471]: 2026-01-31 06:13:13.523445977 +0000 UTC m=+0.177371784 container start 876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:13:13 compute-0 podman[240471]: 2026-01-31 06:13:13.531781052 +0000 UTC m=+0.185706859 container attach 876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jemison, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:13:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:13 compute-0 gifted_jemison[240487]: {
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:     "0": [
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:         {
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "devices": [
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "/dev/loop3"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             ],
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_name": "ceph_lv0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_size": "21470642176",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "name": "ceph_lv0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "tags": {
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cluster_name": "ceph",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.crush_device_class": "",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.encrypted": "0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.objectstore": "bluestore",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osd_id": "0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.type": "block",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.vdo": "0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.with_tpm": "0"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             },
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "type": "block",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "vg_name": "ceph_vg0"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:         }
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:     ],
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:     "1": [
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:         {
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "devices": [
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "/dev/loop4"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             ],
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_name": "ceph_lv1",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_size": "21470642176",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "name": "ceph_lv1",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "tags": {
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cluster_name": "ceph",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.crush_device_class": "",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.encrypted": "0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.objectstore": "bluestore",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osd_id": "1",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.type": "block",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.vdo": "0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.with_tpm": "0"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             },
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "type": "block",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "vg_name": "ceph_vg1"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:         }
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:     ],
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:     "2": [
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:         {
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "devices": [
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "/dev/loop5"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             ],
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_name": "ceph_lv2",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_size": "21470642176",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "name": "ceph_lv2",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "tags": {
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.cluster_name": "ceph",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.crush_device_class": "",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.encrypted": "0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.objectstore": "bluestore",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osd_id": "2",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.type": "block",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.vdo": "0",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:                 "ceph.with_tpm": "0"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             },
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "type": "block",
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:             "vg_name": "ceph_vg2"
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:         }
Jan 31 06:13:13 compute-0 gifted_jemison[240487]:     ]
Jan 31 06:13:13 compute-0 gifted_jemison[240487]: }
Jan 31 06:13:13 compute-0 systemd[1]: libpod-876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839.scope: Deactivated successfully.
Jan 31 06:13:13 compute-0 podman[240471]: 2026-01-31 06:13:13.801142691 +0000 UTC m=+0.455068528 container died 876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 06:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a02e92cab432cd29b3f6705ee7aea3bdc1e1e35d5b1cc86685f899f4bf3228bb-merged.mount: Deactivated successfully.
Jan 31 06:13:13 compute-0 podman[240471]: 2026-01-31 06:13:13.87694967 +0000 UTC m=+0.530875477 container remove 876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:13:13 compute-0 systemd[1]: libpod-conmon-876bcc6c9eafabfc8dbff80c64b4bc0903e9d10e5a84e5df4f6095f9a57a1839.scope: Deactivated successfully.
Jan 31 06:13:13 compute-0 sudo[240391]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:13 compute-0 sudo[240509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:13:13 compute-0 sudo[240509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:13 compute-0 sudo[240509]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:14 compute-0 sudo[240534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:13:14 compute-0 sudo[240534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:14 compute-0 podman[240571]: 2026-01-31 06:13:14.338204761 +0000 UTC m=+0.047020568 container create 9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 06:13:14 compute-0 systemd[1]: Started libpod-conmon-9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5.scope.
Jan 31 06:13:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:13:14 compute-0 podman[240571]: 2026-01-31 06:13:14.313262597 +0000 UTC m=+0.022078424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:13:14 compute-0 podman[240571]: 2026-01-31 06:13:14.518155357 +0000 UTC m=+0.226971164 container init 9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:13:14 compute-0 podman[240571]: 2026-01-31 06:13:14.526714178 +0000 UTC m=+0.235529995 container start 9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:13:14 compute-0 dreamy_lehmann[240588]: 167 167
Jan 31 06:13:14 compute-0 systemd[1]: libpod-9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5.scope: Deactivated successfully.
Jan 31 06:13:14 compute-0 podman[240571]: 2026-01-31 06:13:14.594620514 +0000 UTC m=+0.303436311 container attach 9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lehmann, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 06:13:14 compute-0 podman[240571]: 2026-01-31 06:13:14.595090067 +0000 UTC m=+0.303905864 container died 9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:13:14 compute-0 ceph-mon[75251]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-82403d55ade59215ce14776bbadd523bc00908f2d40e7a04ffad2521f03f824b-merged.mount: Deactivated successfully.
Jan 31 06:13:15 compute-0 podman[240571]: 2026-01-31 06:13:15.2318425 +0000 UTC m=+0.940658297 container remove 9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 06:13:15 compute-0 systemd[1]: libpod-conmon-9f33ecc7ca698989b0d7b81c5b4953322d00c89053d8fe414df0be30378dead5.scope: Deactivated successfully.
Jan 31 06:13:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:13:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:13:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:13:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:13:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:13:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:13:15 compute-0 podman[240614]: 2026-01-31 06:13:15.377036766 +0000 UTC m=+0.050628510 container create f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_elbakyan, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 06:13:15 compute-0 systemd[1]: Started libpod-conmon-f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34.scope.
Jan 31 06:13:15 compute-0 podman[240614]: 2026-01-31 06:13:15.347325598 +0000 UTC m=+0.020917372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:13:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2a3035c8428d93140464077538e70e7d95072cac2656f24048e8fcfb4bd665/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2a3035c8428d93140464077538e70e7d95072cac2656f24048e8fcfb4bd665/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2a3035c8428d93140464077538e70e7d95072cac2656f24048e8fcfb4bd665/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2a3035c8428d93140464077538e70e7d95072cac2656f24048e8fcfb4bd665/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:13:15 compute-0 podman[240614]: 2026-01-31 06:13:15.470440971 +0000 UTC m=+0.144032765 container init f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:13:15 compute-0 podman[240614]: 2026-01-31 06:13:15.475172784 +0000 UTC m=+0.148764548 container start f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 31 06:13:15 compute-0 podman[240614]: 2026-01-31 06:13:15.487028889 +0000 UTC m=+0.160620643 container attach f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_elbakyan, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 06:13:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:15 compute-0 ceph-mon[75251]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:16 compute-0 lvm[240709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:13:16 compute-0 lvm[240710]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:13:16 compute-0 lvm[240710]: VG ceph_vg1 finished
Jan 31 06:13:16 compute-0 lvm[240709]: VG ceph_vg0 finished
Jan 31 06:13:16 compute-0 lvm[240712]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:13:16 compute-0 lvm[240712]: VG ceph_vg2 finished
Jan 31 06:13:16 compute-0 competent_elbakyan[240631]: {}
Jan 31 06:13:16 compute-0 systemd[1]: libpod-f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34.scope: Deactivated successfully.
Jan 31 06:13:16 compute-0 systemd[1]: libpod-f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34.scope: Consumed 1.192s CPU time.
Jan 31 06:13:16 compute-0 podman[240614]: 2026-01-31 06:13:16.27581836 +0000 UTC m=+0.949410104 container died f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_elbakyan, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d2a3035c8428d93140464077538e70e7d95072cac2656f24048e8fcfb4bd665-merged.mount: Deactivated successfully.
Jan 31 06:13:16 compute-0 podman[240614]: 2026-01-31 06:13:16.458508224 +0000 UTC m=+1.132100008 container remove f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:13:16 compute-0 systemd[1]: libpod-conmon-f4e520ed95d37b76dd3178dc77dcacc640645b6fc612557843587c234adfaa34.scope: Deactivated successfully.
Jan 31 06:13:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:16 compute-0 sudo[240534]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:13:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:13:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:13:16 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:13:16 compute-0 sudo[240730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:13:16 compute-0 sudo[240730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:13:16 compute-0 sudo[240730]: pam_unix(sudo:session): session closed for user root
Jan 31 06:13:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:13:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:13:18 compute-0 ceph-mon[75251]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:19 compute-0 ceph-mon[75251]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:22 compute-0 ceph-mon[75251]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:24 compute-0 ceph-mon[75251]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:26 compute-0 nova_compute[239679]: 2026-01-31 06:13:26.112 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:26 compute-0 ceph-mon[75251]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:13:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3308 writes, 14K keys, 3308 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3308 writes, 3308 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1252 writes, 5473 keys, 1252 commit groups, 1.0 writes per commit group, ingest: 8.41 MB, 0.01 MB/s
                                           Interval WAL: 1252 writes, 1252 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     24.7      0.60              0.04         6    0.100       0      0       0.0       0.0
                                             L6      1/0    7.33 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     55.7     46.1      0.78              0.10         5    0.156     19K   2196       0.0       0.0
                                            Sum      1/0    7.33 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     31.5     36.8      1.38              0.14        11    0.125     19K   2196       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     98.3    100.1      0.27              0.06         6    0.046     12K   1452       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     55.7     46.1      0.78              0.10         5    0.156     19K   2196       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     24.8      0.59              0.04         5    0.119       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 1.4 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2e66f78d0#2 capacity: 308.00 MB usage: 1.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(95,1.46 MB,0.473265%) FilterBlock(12,63.30 KB,0.0200693%) IndexBlock(12,130.14 KB,0.0412631%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 06:13:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:28 compute-0 nova_compute[239679]: 2026-01-31 06:13:28.573 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:28 compute-0 ceph-mon[75251]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:30 compute-0 ceph-mon[75251]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.501285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840011501320, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1563, "num_deletes": 507, "total_data_size": 2042999, "memory_usage": 2075816, "flush_reason": "Manual Compaction"}
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840011557339, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2001476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13541, "largest_seqno": 15103, "table_properties": {"data_size": 1994706, "index_size": 3396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 16587, "raw_average_key_size": 18, "raw_value_size": 1979128, "raw_average_value_size": 2177, "num_data_blocks": 156, "num_entries": 909, "num_filter_entries": 909, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769839864, "oldest_key_time": 1769839864, "file_creation_time": 1769840011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 56104 microseconds, and 4142 cpu microseconds.
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.557386) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2001476 bytes OK
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.557404) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.571290) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.571363) EVENT_LOG_v1 {"time_micros": 1769840011571353, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.571387) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2035103, prev total WAL file size 2035103, number of live WAL files 2.
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.572280) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1954KB)], [32(7509KB)]
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840011572389, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9691463, "oldest_snapshot_seqno": -1}
Jan 31 06:13:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3915 keys, 7704196 bytes, temperature: kUnknown
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840011632962, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7704196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7675770, "index_size": 17564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 95769, "raw_average_key_size": 24, "raw_value_size": 7602682, "raw_average_value_size": 1941, "num_data_blocks": 743, "num_entries": 3915, "num_filter_entries": 3915, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769840011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.633236) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7704196 bytes
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.647869) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.8 rd, 127.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.3 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.7) write-amplify(3.8) OK, records in: 4942, records dropped: 1027 output_compression: NoCompression
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.647925) EVENT_LOG_v1 {"time_micros": 1769840011647898, "job": 14, "event": "compaction_finished", "compaction_time_micros": 60634, "compaction_time_cpu_micros": 18287, "output_level": 6, "num_output_files": 1, "total_output_size": 7704196, "num_input_records": 4942, "num_output_records": 3915, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840011648415, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840011649525, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.572081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.649666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.649675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.649677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.649679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:13:31 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:13:31.649681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:13:32 compute-0 ceph-mon[75251]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:34 compute-0 ceph-mon[75251]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:36 compute-0 ceph-mon[75251]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:13:37 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/65514015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:13:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:13:37 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/65514015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:13:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:13:37 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3424843896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:13:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:13:37 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3424843896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:13:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:38 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/65514015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:13:38 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/65514015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:13:38 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/3424843896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:13:38 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/3424843896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:13:38 compute-0 ceph-mon[75251]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:38 compute-0 podman[240757]: 2026-01-31 06:13:38.152902397 +0000 UTC m=+0.066104196 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 06:13:38 compute-0 podman[240756]: 2026-01-31 06:13:38.183939033 +0000 UTC m=+0.097174623 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 06:13:39 compute-0 nova_compute[239679]: 2026-01-31 06:13:39.510 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:39 compute-0 nova_compute[239679]: 2026-01-31 06:13:39.510 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:39 compute-0 nova_compute[239679]: 2026-01-31 06:13:39.511 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:13:39 compute-0 nova_compute[239679]: 2026-01-31 06:13:39.511 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:13:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:40 compute-0 ceph-mon[75251]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:41 compute-0 ceph-mon[75251]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:44 compute-0 ceph-mon[75251]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:13:44
Jan 31 06:13:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:13:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:13:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 31 06:13:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:13:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:46 compute-0 ceph-mon[75251]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.718 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.718 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.719 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.719 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.719 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.720 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.720 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.720 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:13:46 compute-0 nova_compute[239679]: 2026-01-31 06:13:46.720 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:13:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:48 compute-0 ceph-mon[75251]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:13:50.207 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:13:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:13:50.208 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:13:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:13:50.208 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:13:50 compute-0 ceph-mon[75251]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.166 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.166 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.166 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.166 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.167 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:13:52 compute-0 ceph-mon[75251]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:13:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2967500975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.732 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.931 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.933 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5159MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.933 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:13:52 compute-0 nova_compute[239679]: 2026-01-31 06:13:52.933 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:13:53 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2967500975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:13:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:54 compute-0 ceph-mon[75251]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:55 compute-0 nova_compute[239679]: 2026-01-31 06:13:55.048 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:13:55 compute-0 nova_compute[239679]: 2026-01-31 06:13:55.048 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:13:55 compute-0 nova_compute[239679]: 2026-01-31 06:13:55.065 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:13:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:13:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053374024' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:55 compute-0 nova_compute[239679]: 2026-01-31 06:13:55.600 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:13:55 compute-0 nova_compute[239679]: 2026-01-31 06:13:55.606 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:13:55 compute-0 nova_compute[239679]: 2026-01-31 06:13:55.760 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:13:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:13:55 compute-0 nova_compute[239679]: 2026-01-31 06:13:55.970 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:13:55 compute-0 nova_compute[239679]: 2026-01-31 06:13:55.971 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:13:56 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2053374024' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:13:56 compute-0 ceph-mon[75251]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:13:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:58 compute-0 ceph-mon[75251]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:13:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:00 compute-0 ceph-mon[75251]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:02 compute-0 ceph-mon[75251]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:14:03 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438232944' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:14:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:14:03 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/438232944' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:14:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:03 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/438232944' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:14:03 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/438232944' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:14:04 compute-0 ceph-mon[75251]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:06 compute-0 ceph-mon[75251]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:08 compute-0 ceph-mon[75251]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:09 compute-0 podman[240845]: 2026-01-31 06:14:09.137976041 +0000 UTC m=+0.045176616 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:14:09 compute-0 podman[240844]: 2026-01-31 06:14:09.154087225 +0000 UTC m=+0.069210063 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:14:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:11 compute-0 ceph-mon[75251]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:12 compute-0 ceph-mon[75251]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:13 compute-0 ceph-mon[75251]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:14:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:14:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:14:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:14:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:14:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:14:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:16 compute-0 sudo[240887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:14:16 compute-0 sudo[240887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:16 compute-0 sudo[240887]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:16 compute-0 sudo[240912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:14:16 compute-0 sudo[240912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:16 compute-0 ceph-mon[75251]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:17 compute-0 sudo[240912]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:14:17 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:14:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:14:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:14:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:14:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:14:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:14:17 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:14:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:14:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:14:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:14:17 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:14:17 compute-0 sudo[240968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:14:17 compute-0 sudo[240968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:17 compute-0 sudo[240968]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:17 compute-0 sudo[240993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:14:17 compute-0 sudo[240993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:17 compute-0 podman[241031]: 2026-01-31 06:14:17.779256911 +0000 UTC m=+0.028769272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:14:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:14:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:14:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:14:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:14:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:14:17 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:14:17 compute-0 podman[241031]: 2026-01-31 06:14:17.935433527 +0000 UTC m=+0.184945868 container create 65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 06:14:18 compute-0 systemd[1]: Started libpod-conmon-65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64.scope.
Jan 31 06:14:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:14:18 compute-0 podman[241031]: 2026-01-31 06:14:18.361420904 +0000 UTC m=+0.610933255 container init 65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 31 06:14:18 compute-0 podman[241031]: 2026-01-31 06:14:18.372863797 +0000 UTC m=+0.622376138 container start 65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:14:18 compute-0 frosty_euclid[241047]: 167 167
Jan 31 06:14:18 compute-0 systemd[1]: libpod-65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64.scope: Deactivated successfully.
Jan 31 06:14:18 compute-0 podman[241031]: 2026-01-31 06:14:18.389816345 +0000 UTC m=+0.639328716 container attach 65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:14:18 compute-0 podman[241031]: 2026-01-31 06:14:18.390440703 +0000 UTC m=+0.639953044 container died 65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 06:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-06d9774b3e43075f351f4c5e64c28bbc1d0b1f4ca6b8b93e7cd706bc9af2bba6-merged.mount: Deactivated successfully.
Jan 31 06:14:18 compute-0 podman[241031]: 2026-01-31 06:14:18.490174896 +0000 UTC m=+0.739687237 container remove 65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_euclid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:14:18 compute-0 systemd[1]: libpod-conmon-65423f29937d27616943409c2b73807c010f74d91c72976508ccf3c1c1a29c64.scope: Deactivated successfully.
Jan 31 06:14:18 compute-0 podman[241074]: 2026-01-31 06:14:18.606718904 +0000 UTC m=+0.027033184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:14:18 compute-0 podman[241074]: 2026-01-31 06:14:18.909598417 +0000 UTC m=+0.329912687 container create 892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_feistel, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:14:19 compute-0 ceph-mon[75251]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:19 compute-0 systemd[1]: Started libpod-conmon-892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0.scope.
Jan 31 06:14:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867e2f62832369107e464818dfff12d90deb61f19e6fd0f1c3ecdfe965200f6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867e2f62832369107e464818dfff12d90deb61f19e6fd0f1c3ecdfe965200f6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867e2f62832369107e464818dfff12d90deb61f19e6fd0f1c3ecdfe965200f6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867e2f62832369107e464818dfff12d90deb61f19e6fd0f1c3ecdfe965200f6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/867e2f62832369107e464818dfff12d90deb61f19e6fd0f1c3ecdfe965200f6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:19 compute-0 podman[241074]: 2026-01-31 06:14:19.090198242 +0000 UTC m=+0.510512522 container init 892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_feistel, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 31 06:14:19 compute-0 podman[241074]: 2026-01-31 06:14:19.096974233 +0000 UTC m=+0.517288503 container start 892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:14:19 compute-0 podman[241074]: 2026-01-31 06:14:19.130079127 +0000 UTC m=+0.550393387 container attach 892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_feistel, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:14:19 compute-0 hungry_feistel[241090]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:14:19 compute-0 hungry_feistel[241090]: --> All data devices are unavailable
Jan 31 06:14:19 compute-0 systemd[1]: libpod-892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0.scope: Deactivated successfully.
Jan 31 06:14:19 compute-0 podman[241074]: 2026-01-31 06:14:19.52578768 +0000 UTC m=+0.946101940 container died 892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 06:14:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-867e2f62832369107e464818dfff12d90deb61f19e6fd0f1c3ecdfe965200f6a-merged.mount: Deactivated successfully.
Jan 31 06:14:19 compute-0 podman[241074]: 2026-01-31 06:14:19.803324638 +0000 UTC m=+1.223638898 container remove 892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:14:19 compute-0 sudo[240993]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:19 compute-0 systemd[1]: libpod-conmon-892411532af437b86f4547516e2ec3a17ad0b8b7c9dd87f2907233afa22719d0.scope: Deactivated successfully.
Jan 31 06:14:19 compute-0 sudo[241123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:14:19 compute-0 sudo[241123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:19 compute-0 sudo[241123]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:19 compute-0 sudo[241148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:14:19 compute-0 sudo[241148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:20 compute-0 ceph-mon[75251]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:20 compute-0 podman[241184]: 2026-01-31 06:14:20.283372453 +0000 UTC m=+0.092935012 container create d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sinoussi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:14:20 compute-0 podman[241184]: 2026-01-31 06:14:20.217663974 +0000 UTC m=+0.027226543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:14:20 compute-0 systemd[1]: Started libpod-conmon-d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835.scope.
Jan 31 06:14:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:14:20 compute-0 podman[241184]: 2026-01-31 06:14:20.411820677 +0000 UTC m=+0.221383236 container init d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sinoussi, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 06:14:20 compute-0 podman[241184]: 2026-01-31 06:14:20.417073244 +0000 UTC m=+0.226635803 container start d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sinoussi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:14:20 compute-0 thirsty_sinoussi[241200]: 167 167
Jan 31 06:14:20 compute-0 systemd[1]: libpod-d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835.scope: Deactivated successfully.
Jan 31 06:14:20 compute-0 podman[241184]: 2026-01-31 06:14:20.42513948 +0000 UTC m=+0.234702079 container attach d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sinoussi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 06:14:20 compute-0 podman[241184]: 2026-01-31 06:14:20.425712056 +0000 UTC m=+0.235274625 container died d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 06:14:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2edbb4e054677309f594d0bc01c66dbb53de548ebf3b21fa0710fd72e8494f-merged.mount: Deactivated successfully.
Jan 31 06:14:20 compute-0 podman[241184]: 2026-01-31 06:14:20.497028401 +0000 UTC m=+0.306590950 container remove d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 06:14:20 compute-0 systemd[1]: libpod-conmon-d9ef71f10f4cd4b29b42473e0d87d290ef14b0927081fc6aad1f9e38f93e7835.scope: Deactivated successfully.
Jan 31 06:14:20 compute-0 podman[241223]: 2026-01-31 06:14:20.629652313 +0000 UTC m=+0.045905616 container create 2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shaw, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 06:14:20 compute-0 systemd[1]: Started libpod-conmon-2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1.scope.
Jan 31 06:14:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704ce751fa81ebe0daf353276988ab2bcb1f9f111f7d5db8597998a0981fcadf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704ce751fa81ebe0daf353276988ab2bcb1f9f111f7d5db8597998a0981fcadf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704ce751fa81ebe0daf353276988ab2bcb1f9f111f7d5db8597998a0981fcadf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/704ce751fa81ebe0daf353276988ab2bcb1f9f111f7d5db8597998a0981fcadf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:20 compute-0 podman[241223]: 2026-01-31 06:14:20.602181774 +0000 UTC m=+0.018435107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:14:20 compute-0 podman[241223]: 2026-01-31 06:14:20.712485841 +0000 UTC m=+0.128739174 container init 2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shaw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 06:14:20 compute-0 podman[241223]: 2026-01-31 06:14:20.719363153 +0000 UTC m=+0.135616456 container start 2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 06:14:20 compute-0 podman[241223]: 2026-01-31 06:14:20.72924645 +0000 UTC m=+0.145499773 container attach 2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:14:20 compute-0 objective_shaw[241239]: {
Jan 31 06:14:20 compute-0 objective_shaw[241239]:     "0": [
Jan 31 06:14:20 compute-0 objective_shaw[241239]:         {
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "devices": [
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "/dev/loop3"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             ],
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_name": "ceph_lv0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_size": "21470642176",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "name": "ceph_lv0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "tags": {
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cluster_name": "ceph",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.crush_device_class": "",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.encrypted": "0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.objectstore": "bluestore",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osd_id": "0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.type": "block",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.vdo": "0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.with_tpm": "0"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             },
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "type": "block",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "vg_name": "ceph_vg0"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:         }
Jan 31 06:14:20 compute-0 objective_shaw[241239]:     ],
Jan 31 06:14:20 compute-0 objective_shaw[241239]:     "1": [
Jan 31 06:14:20 compute-0 objective_shaw[241239]:         {
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "devices": [
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "/dev/loop4"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             ],
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_name": "ceph_lv1",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_size": "21470642176",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "name": "ceph_lv1",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "tags": {
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cluster_name": "ceph",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.crush_device_class": "",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.encrypted": "0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.objectstore": "bluestore",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osd_id": "1",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.type": "block",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.vdo": "0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.with_tpm": "0"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             },
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "type": "block",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "vg_name": "ceph_vg1"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:         }
Jan 31 06:14:20 compute-0 objective_shaw[241239]:     ],
Jan 31 06:14:20 compute-0 objective_shaw[241239]:     "2": [
Jan 31 06:14:20 compute-0 objective_shaw[241239]:         {
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "devices": [
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "/dev/loop5"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             ],
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_name": "ceph_lv2",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_size": "21470642176",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "name": "ceph_lv2",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "tags": {
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.cluster_name": "ceph",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.crush_device_class": "",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.encrypted": "0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.objectstore": "bluestore",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osd_id": "2",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.type": "block",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.vdo": "0",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:                 "ceph.with_tpm": "0"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             },
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "type": "block",
Jan 31 06:14:20 compute-0 objective_shaw[241239]:             "vg_name": "ceph_vg2"
Jan 31 06:14:20 compute-0 objective_shaw[241239]:         }
Jan 31 06:14:20 compute-0 objective_shaw[241239]:     ]
Jan 31 06:14:20 compute-0 objective_shaw[241239]: }
Jan 31 06:14:21 compute-0 systemd[1]: libpod-2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1.scope: Deactivated successfully.
Jan 31 06:14:21 compute-0 podman[241223]: 2026-01-31 06:14:21.006056267 +0000 UTC m=+0.422309580 container died 2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shaw, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 06:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-704ce751fa81ebe0daf353276988ab2bcb1f9f111f7d5db8597998a0981fcadf-merged.mount: Deactivated successfully.
Jan 31 06:14:21 compute-0 podman[241223]: 2026-01-31 06:14:21.506423721 +0000 UTC m=+0.922677034 container remove 2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_shaw, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:14:21 compute-0 systemd[1]: libpod-conmon-2c4489450bdf1e61dfeced4f2fa68932ab732afd5a62d0d5617bd24d1f3f2ab1.scope: Deactivated successfully.
Jan 31 06:14:21 compute-0 sudo[241148]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:21 compute-0 sudo[241262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:14:21 compute-0 sudo[241262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:21 compute-0 sudo[241262]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:21 compute-0 sudo[241287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:14:21 compute-0 sudo[241287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:21 compute-0 podman[241325]: 2026-01-31 06:14:21.936618571 +0000 UTC m=+0.083764385 container create e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 06:14:21 compute-0 podman[241325]: 2026-01-31 06:14:21.869843272 +0000 UTC m=+0.016989116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:14:22 compute-0 systemd[1]: Started libpod-conmon-e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1.scope.
Jan 31 06:14:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:14:22 compute-0 podman[241325]: 2026-01-31 06:14:22.219927909 +0000 UTC m=+0.367073753 container init e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 06:14:22 compute-0 podman[241325]: 2026-01-31 06:14:22.225889066 +0000 UTC m=+0.373034890 container start e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:14:22 compute-0 recursing_johnson[241341]: 167 167
Jan 31 06:14:22 compute-0 systemd[1]: libpod-e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1.scope: Deactivated successfully.
Jan 31 06:14:22 compute-0 podman[241325]: 2026-01-31 06:14:22.263841118 +0000 UTC m=+0.410986962 container attach e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 06:14:22 compute-0 podman[241325]: 2026-01-31 06:14:22.264399884 +0000 UTC m=+0.411545708 container died e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_johnson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6182bb26d78244c6e7ab0df2cbe461ed5e90b97b68f580d379c5243e41cd08a8-merged.mount: Deactivated successfully.
Jan 31 06:14:22 compute-0 podman[241325]: 2026-01-31 06:14:22.4229358 +0000 UTC m=+0.570081624 container remove e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_johnson, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:14:22 compute-0 systemd[1]: libpod-conmon-e7722862bd72946b0ac86edfaf0139597877d2556d94e8a02be84c2dd72a06f1.scope: Deactivated successfully.
Jan 31 06:14:22 compute-0 podman[241366]: 2026-01-31 06:14:22.546883759 +0000 UTC m=+0.038039165 container create 388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:14:22 compute-0 systemd[1]: Started libpod-conmon-388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c.scope.
Jan 31 06:14:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda90693084dcb2a253f0ff93a1cdaa50e3ceed9348075c610bfd1c28bdf6b5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda90693084dcb2a253f0ff93a1cdaa50e3ceed9348075c610bfd1c28bdf6b5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda90693084dcb2a253f0ff93a1cdaa50e3ceed9348075c610bfd1c28bdf6b5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda90693084dcb2a253f0ff93a1cdaa50e3ceed9348075c610bfd1c28bdf6b5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:14:22 compute-0 podman[241366]: 2026-01-31 06:14:22.530417219 +0000 UTC m=+0.021572665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:14:22 compute-0 podman[241366]: 2026-01-31 06:14:22.701301481 +0000 UTC m=+0.192456927 container init 388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 06:14:22 compute-0 podman[241366]: 2026-01-31 06:14:22.706546368 +0000 UTC m=+0.197701774 container start 388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 06:14:22 compute-0 podman[241366]: 2026-01-31 06:14:22.776767203 +0000 UTC m=+0.267922649 container attach 388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:14:22 compute-0 ceph-mon[75251]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:23 compute-0 lvm[241461]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:14:23 compute-0 lvm[241461]: VG ceph_vg1 finished
Jan 31 06:14:23 compute-0 lvm[241460]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:14:23 compute-0 lvm[241460]: VG ceph_vg0 finished
Jan 31 06:14:23 compute-0 lvm[241463]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:14:23 compute-0 lvm[241463]: VG ceph_vg2 finished
Jan 31 06:14:23 compute-0 cranky_mendel[241382]: {}
Jan 31 06:14:23 compute-0 systemd[1]: libpod-388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c.scope: Deactivated successfully.
Jan 31 06:14:23 compute-0 systemd[1]: libpod-388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c.scope: Consumed 1.057s CPU time.
Jan 31 06:14:23 compute-0 podman[241366]: 2026-01-31 06:14:23.439033438 +0000 UTC m=+0.930188884 container died 388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-eda90693084dcb2a253f0ff93a1cdaa50e3ceed9348075c610bfd1c28bdf6b5a-merged.mount: Deactivated successfully.
Jan 31 06:14:23 compute-0 podman[241366]: 2026-01-31 06:14:23.476552448 +0000 UTC m=+0.967707864 container remove 388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mendel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 06:14:23 compute-0 systemd[1]: libpod-conmon-388a0104c571538abb35d707304deecc7a8c62bd88dcc4af8654047f31e80e9c.scope: Deactivated successfully.
Jan 31 06:14:23 compute-0 sudo[241287]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:14:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:14:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:14:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:14:23 compute-0 sudo[241477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:14:23 compute-0 sudo[241477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:14:23 compute-0 sudo[241477]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:14:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:14:24 compute-0 ceph-mon[75251]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:26 compute-0 ceph-mon[75251]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:26 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:29 compute-0 ceph-mon[75251]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:30 compute-0 ceph-mon[75251]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:32 compute-0 ceph-mon[75251]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:34 compute-0 ceph-mon[75251]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:36 compute-0 ceph-mon[75251]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:38 compute-0 ceph-mon[75251]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:14:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5852 writes, 25K keys, 5852 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5852 writes, 985 syncs, 5.94 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:14:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:40 compute-0 podman[241502]: 2026-01-31 06:14:40.15293452 +0000 UTC m=+0.072685244 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:14:40 compute-0 podman[241503]: 2026-01-31 06:14:40.155013059 +0000 UTC m=+0.075226336 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:14:40 compute-0 ceph-mon[75251]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:41 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:43 compute-0 ceph-mon[75251]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:14:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 8536 writes, 35K keys, 8536 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8536 writes, 1745 syncs, 4.89 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:14:44 compute-0 ceph-mon[75251]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:14:44
Jan 31 06:14:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:14:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:14:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms', 'images']
Jan 31 06:14:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:14:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:46 compute-0 ceph-mon[75251]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:48 compute-0 ceph-mon[75251]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:50 compute-0 sshd-session[241548]: Connection closed by 14.63.166.251 port 51721 [preauth]
Jan 31 06:14:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:14:50.209 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:14:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:14:50.209 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:14:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:14:50.210 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:14:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:14:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5753 writes, 25K keys, 5753 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5753 writes, 900 syncs, 6.39 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:14:51 compute-0 ceph-mon[75251]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:52 compute-0 ceph-mon[75251]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:54 compute-0 ceph-mon[75251]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:54 compute-0 ceph-mgr[75550]: [devicehealth INFO root] Check health
Jan 31 06:14:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:14:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/302207355' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:14:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:14:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/302207355' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:14:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/302207355' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:14:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/302207355' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:14:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:14:55 compute-0 nova_compute[239679]: 2026-01-31 06:14:55.963 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:55 compute-0 nova_compute[239679]: 2026-01-31 06:14:55.964 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.211 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.211 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.211 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.903 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.905 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.905 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.906 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.906 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.906 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.907 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.907 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:14:56 compute-0 nova_compute[239679]: 2026-01-31 06:14:56.907 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:14:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:14:57 compute-0 nova_compute[239679]: 2026-01-31 06:14:57.210 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:14:57 compute-0 nova_compute[239679]: 2026-01-31 06:14:57.211 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:14:57 compute-0 nova_compute[239679]: 2026-01-31 06:14:57.212 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:14:57 compute-0 nova_compute[239679]: 2026-01-31 06:14:57.212 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:14:57 compute-0 nova_compute[239679]: 2026-01-31 06:14:57.213 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:14:57 compute-0 ceph-mon[75251]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:14:57 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203429062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:14:57 compute-0 nova_compute[239679]: 2026-01-31 06:14:57.958 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:14:58 compute-0 nova_compute[239679]: 2026-01-31 06:14:58.090 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:14:58 compute-0 nova_compute[239679]: 2026-01-31 06:14:58.092 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:14:58 compute-0 nova_compute[239679]: 2026-01-31 06:14:58.092 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:14:58 compute-0 nova_compute[239679]: 2026-01-31 06:14:58.092 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:14:58 compute-0 nova_compute[239679]: 2026-01-31 06:14:58.911 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:14:58 compute-0 nova_compute[239679]: 2026-01-31 06:14:58.912 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:14:58 compute-0 nova_compute[239679]: 2026-01-31 06:14:58.934 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:14:59 compute-0 ceph-mon[75251]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:14:59 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2203429062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:14:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:14:59 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584387859' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:14:59 compute-0 nova_compute[239679]: 2026-01-31 06:14:59.415 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:14:59 compute-0 nova_compute[239679]: 2026-01-31 06:14:59.419 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:14:59 compute-0 nova_compute[239679]: 2026-01-31 06:14:59.535 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:14:59 compute-0 nova_compute[239679]: 2026-01-31 06:14:59.538 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:14:59 compute-0 nova_compute[239679]: 2026-01-31 06:14:59.539 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:14:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:00 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2584387859' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:15:00 compute-0 ceph-mon[75251]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:03 compute-0 ceph-mon[75251]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:04 compute-0 ceph-mon[75251]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:06 compute-0 ceph-mon[75251]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:08 compute-0 ceph-mon[75251]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:10 compute-0 ceph-mon[75251]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:11 compute-0 podman[241596]: 2026-01-31 06:15:11.11774778 +0000 UTC m=+0.042639944 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 06:15:11 compute-0 podman[241595]: 2026-01-31 06:15:11.133737988 +0000 UTC m=+0.062118930 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 31 06:15:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:12 compute-0 ceph-mon[75251]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:14 compute-0 sshd[182308]: Timeout before authentication for connection from 106.54.176.158 to 38.102.83.30, pid = 240496
Jan 31 06:15:14 compute-0 ceph-mon[75251]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:15:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:15:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:15:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:15:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:15:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:15:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:16 compute-0 ceph-mon[75251]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:18 compute-0 ceph-mon[75251]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:20 compute-0 ceph-mon[75251]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:22 compute-0 ceph-mon[75251]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:23 compute-0 sudo[241640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:15:23 compute-0 sudo[241640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:23 compute-0 sudo[241640]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:23 compute-0 sudo[241665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 06:15:23 compute-0 sudo[241665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:24 compute-0 sudo[241665]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:15:24 compute-0 ceph-mon[75251]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:15:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:24 compute-0 sudo[241710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:15:24 compute-0 sudo[241710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:24 compute-0 sudo[241710]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:24 compute-0 sudo[241735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:15:24 compute-0 sudo[241735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:24 compute-0 sudo[241735]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:15:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:15:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:15:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:15:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:15:25 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:25 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:15:25 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:15:25 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:15:25 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:15:25 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:15:25 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:15:25 compute-0 sudo[241791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:15:25 compute-0 sudo[241791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:25 compute-0 sudo[241791]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:25 compute-0 sudo[241816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:15:25 compute-0 sudo[241816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:25 compute-0 podman[241853]: 2026-01-31 06:15:25.48069978 +0000 UTC m=+0.023087647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:15:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:15:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:15:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:15:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:15:25 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:15:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:25 compute-0 podman[241853]: 2026-01-31 06:15:25.755225383 +0000 UTC m=+0.297613210 container create 1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 06:15:26 compute-0 systemd[1]: Started libpod-conmon-1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1.scope.
Jan 31 06:15:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:15:26 compute-0 podman[241853]: 2026-01-31 06:15:26.624724858 +0000 UTC m=+1.167112685 container init 1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:15:26 compute-0 podman[241853]: 2026-01-31 06:15:26.631531808 +0000 UTC m=+1.173919636 container start 1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 06:15:26 compute-0 vigorous_wright[241869]: 167 167
Jan 31 06:15:26 compute-0 systemd[1]: libpod-1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1.scope: Deactivated successfully.
Jan 31 06:15:26 compute-0 ceph-mon[75251]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:26 compute-0 podman[241853]: 2026-01-31 06:15:26.840414053 +0000 UTC m=+1.382801870 container attach 1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:15:26 compute-0 podman[241853]: 2026-01-31 06:15:26.840823415 +0000 UTC m=+1.383211232 container died 1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:15:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a45a402712908d6b47ea2cd4804c76e68b554a6b43c8363685e40f7386c8351-merged.mount: Deactivated successfully.
Jan 31 06:15:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:28 compute-0 ceph-mon[75251]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:29 compute-0 podman[241853]: 2026-01-31 06:15:29.192929613 +0000 UTC m=+3.735317470 container remove 1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:15:29 compute-0 systemd[1]: libpod-conmon-1d7aaefdbfe89a16cfb81761f324c0dbae90c6b81839dbee36230175c6fc08c1.scope: Deactivated successfully.
Jan 31 06:15:29 compute-0 podman[241893]: 2026-01-31 06:15:29.342433068 +0000 UTC m=+0.022380688 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:15:29 compute-0 podman[241893]: 2026-01-31 06:15:29.582916238 +0000 UTC m=+0.262863818 container create 859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_goldwasser, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:15:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:29 compute-0 systemd[1]: Started libpod-conmon-859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43.scope.
Jan 31 06:15:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc29898ef2da2f60e3c84bb982fa5855e80bd2d8dd9c5327ee3ed1fb9544f5a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc29898ef2da2f60e3c84bb982fa5855e80bd2d8dd9c5327ee3ed1fb9544f5a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc29898ef2da2f60e3c84bb982fa5855e80bd2d8dd9c5327ee3ed1fb9544f5a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc29898ef2da2f60e3c84bb982fa5855e80bd2d8dd9c5327ee3ed1fb9544f5a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc29898ef2da2f60e3c84bb982fa5855e80bd2d8dd9c5327ee3ed1fb9544f5a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:30 compute-0 podman[241893]: 2026-01-31 06:15:30.063661492 +0000 UTC m=+0.743609122 container init 859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_goldwasser, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:15:30 compute-0 podman[241893]: 2026-01-31 06:15:30.072839679 +0000 UTC m=+0.752787309 container start 859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_goldwasser, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:15:30 compute-0 blissful_goldwasser[241910]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:15:30 compute-0 blissful_goldwasser[241910]: --> All data devices are unavailable
Jan 31 06:15:30 compute-0 systemd[1]: libpod-859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43.scope: Deactivated successfully.
Jan 31 06:15:30 compute-0 podman[241893]: 2026-01-31 06:15:30.74534613 +0000 UTC m=+1.425293760 container attach 859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 06:15:30 compute-0 podman[241893]: 2026-01-31 06:15:30.746378359 +0000 UTC m=+1.426325949 container died 859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:15:30 compute-0 ceph-mon[75251]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc29898ef2da2f60e3c84bb982fa5855e80bd2d8dd9c5327ee3ed1fb9544f5a6-merged.mount: Deactivated successfully.
Jan 31 06:15:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:32 compute-0 podman[241893]: 2026-01-31 06:15:32.188372557 +0000 UTC m=+2.868320167 container remove 859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_goldwasser, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 06:15:32 compute-0 sudo[241816]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:32 compute-0 systemd[1]: libpod-conmon-859ce319617df1a02ec091ba35161df20e7b8c478dc3e040afb43729a30c5b43.scope: Deactivated successfully.
Jan 31 06:15:32 compute-0 sudo[241941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:15:32 compute-0 sudo[241941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:32 compute-0 sudo[241941]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:32 compute-0 sudo[241966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:15:32 compute-0 sudo[241966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:32 compute-0 podman[242002]: 2026-01-31 06:15:32.556733566 +0000 UTC m=+0.018265362 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:15:32 compute-0 podman[242002]: 2026-01-31 06:15:32.829539641 +0000 UTC m=+0.291071357 container create 377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_northcutt, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 06:15:32 compute-0 ceph-mon[75251]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:33 compute-0 systemd[1]: Started libpod-conmon-377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64.scope.
Jan 31 06:15:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:15:33 compute-0 podman[242002]: 2026-01-31 06:15:33.610182689 +0000 UTC m=+1.071714485 container init 377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 06:15:33 compute-0 podman[242002]: 2026-01-31 06:15:33.618034118 +0000 UTC m=+1.079565844 container start 377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 06:15:33 compute-0 reverent_northcutt[242018]: 167 167
Jan 31 06:15:33 compute-0 systemd[1]: libpod-377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64.scope: Deactivated successfully.
Jan 31 06:15:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:33 compute-0 podman[242002]: 2026-01-31 06:15:33.884218006 +0000 UTC m=+1.345749752 container attach 377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:15:33 compute-0 podman[242002]: 2026-01-31 06:15:33.88470216 +0000 UTC m=+1.346233916 container died 377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_northcutt, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 06:15:34 compute-0 ceph-mon[75251]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcf424f2a65c159f16d5aea8242274a000085e537e0f012442917530d84da928-merged.mount: Deactivated successfully.
Jan 31 06:15:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:35 compute-0 podman[242002]: 2026-01-31 06:15:35.887607354 +0000 UTC m=+3.349139080 container remove 377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 06:15:35 compute-0 systemd[1]: libpod-conmon-377940633ed35b7897f303bf244e11ab0f9fa9da2c2ae1ff6f3e69d7e109ba64.scope: Deactivated successfully.
Jan 31 06:15:36 compute-0 ceph-mon[75251]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:36 compute-0 podman[242043]: 2026-01-31 06:15:36.049453444 +0000 UTC m=+0.022549222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:15:36 compute-0 podman[242043]: 2026-01-31 06:15:36.536525265 +0000 UTC m=+0.509621063 container create 31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:15:36 compute-0 systemd[1]: Started libpod-conmon-31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61.scope.
Jan 31 06:15:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749622883c05b1a1b8a06f17a98daa869ac41a61b16397c044b6d4dfcbde80c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749622883c05b1a1b8a06f17a98daa869ac41a61b16397c044b6d4dfcbde80c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749622883c05b1a1b8a06f17a98daa869ac41a61b16397c044b6d4dfcbde80c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749622883c05b1a1b8a06f17a98daa869ac41a61b16397c044b6d4dfcbde80c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:37 compute-0 podman[242043]: 2026-01-31 06:15:37.059749178 +0000 UTC m=+1.032845006 container init 31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:15:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:37 compute-0 podman[242043]: 2026-01-31 06:15:37.067076713 +0000 UTC m=+1.040172531 container start 31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 06:15:37 compute-0 podman[242043]: 2026-01-31 06:15:37.173745819 +0000 UTC m=+1.146841607 container attach 31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]: {
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:     "0": [
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:         {
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "devices": [
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "/dev/loop3"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             ],
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_name": "ceph_lv0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_size": "21470642176",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "name": "ceph_lv0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "tags": {
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cluster_name": "ceph",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.crush_device_class": "",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.encrypted": "0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.objectstore": "bluestore",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osd_id": "0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.type": "block",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.vdo": "0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.with_tpm": "0"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             },
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "type": "block",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "vg_name": "ceph_vg0"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:         }
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:     ],
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:     "1": [
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:         {
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "devices": [
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "/dev/loop4"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             ],
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_name": "ceph_lv1",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_size": "21470642176",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "name": "ceph_lv1",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "tags": {
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cluster_name": "ceph",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.crush_device_class": "",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.encrypted": "0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.objectstore": "bluestore",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osd_id": "1",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.type": "block",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.vdo": "0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.with_tpm": "0"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             },
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "type": "block",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "vg_name": "ceph_vg1"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:         }
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:     ],
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:     "2": [
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:         {
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "devices": [
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "/dev/loop5"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             ],
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_name": "ceph_lv2",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_size": "21470642176",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "name": "ceph_lv2",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "tags": {
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.cluster_name": "ceph",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.crush_device_class": "",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.encrypted": "0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.objectstore": "bluestore",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osd_id": "2",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.type": "block",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.vdo": "0",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:                 "ceph.with_tpm": "0"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             },
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "type": "block",
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:             "vg_name": "ceph_vg2"
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:         }
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]:     ]
Jan 31 06:15:37 compute-0 fervent_wescoff[242060]: }
Jan 31 06:15:37 compute-0 systemd[1]: libpod-31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61.scope: Deactivated successfully.
Jan 31 06:15:37 compute-0 podman[242043]: 2026-01-31 06:15:37.316658518 +0000 UTC m=+1.289754296 container died 31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:15:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-749622883c05b1a1b8a06f17a98daa869ac41a61b16397c044b6d4dfcbde80c8-merged.mount: Deactivated successfully.
Jan 31 06:15:38 compute-0 ceph-mon[75251]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:38 compute-0 podman[242043]: 2026-01-31 06:15:38.877433299 +0000 UTC m=+2.850529087 container remove 31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 06:15:38 compute-0 systemd[1]: libpod-conmon-31b5df9875b2b22448c7dbdaacbf58103762b10b87c796befef389b7e9387d61.scope: Deactivated successfully.
Jan 31 06:15:38 compute-0 sudo[241966]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:39 compute-0 sudo[242082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:15:39 compute-0 sudo[242082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:39 compute-0 sudo[242082]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:39 compute-0 sudo[242107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:15:39 compute-0 sudo[242107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:39 compute-0 podman[242144]: 2026-01-31 06:15:39.323064021 +0000 UTC m=+0.023361445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:15:39 compute-0 podman[242144]: 2026-01-31 06:15:39.512396539 +0000 UTC m=+0.212693893 container create 80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 06:15:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:39 compute-0 systemd[1]: Started libpod-conmon-80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a.scope.
Jan 31 06:15:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:15:39 compute-0 podman[242144]: 2026-01-31 06:15:39.906057277 +0000 UTC m=+0.606354631 container init 80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 06:15:39 compute-0 podman[242144]: 2026-01-31 06:15:39.913640569 +0000 UTC m=+0.613937903 container start 80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 06:15:39 compute-0 mystifying_morse[242160]: 167 167
Jan 31 06:15:39 compute-0 systemd[1]: libpod-80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a.scope: Deactivated successfully.
Jan 31 06:15:40 compute-0 podman[242144]: 2026-01-31 06:15:40.15914007 +0000 UTC m=+0.859437454 container attach 80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:15:40 compute-0 podman[242144]: 2026-01-31 06:15:40.161607219 +0000 UTC m=+0.861904573 container died 80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 06:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-226de543394e364cf36b29407618b8060ee6e1c3c2d99876e0d96e0849e2037f-merged.mount: Deactivated successfully.
Jan 31 06:15:40 compute-0 podman[242144]: 2026-01-31 06:15:40.798836553 +0000 UTC m=+1.499133887 container remove 80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 06:15:40 compute-0 ceph-mon[75251]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:40 compute-0 systemd[1]: libpod-conmon-80297c36280719a6ebb2c2f61d93b06467c3f355f25fe19eff44f0a3e0cc629a.scope: Deactivated successfully.
Jan 31 06:15:41 compute-0 podman[242185]: 2026-01-31 06:15:41.024744674 +0000 UTC m=+0.114534505 container create db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jemison, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:15:41 compute-0 podman[242185]: 2026-01-31 06:15:40.937202325 +0000 UTC m=+0.026992176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:15:41 compute-0 systemd[1]: Started libpod-conmon-db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca.scope.
Jan 31 06:15:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443db4746e6b603e8c5ab14b87d0a1bca8bdf718fdea0e4d321c2716504b653/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443db4746e6b603e8c5ab14b87d0a1bca8bdf718fdea0e4d321c2716504b653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443db4746e6b603e8c5ab14b87d0a1bca8bdf718fdea0e4d321c2716504b653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4443db4746e6b603e8c5ab14b87d0a1bca8bdf718fdea0e4d321c2716504b653/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:15:41 compute-0 podman[242185]: 2026-01-31 06:15:41.323985999 +0000 UTC m=+0.413775830 container init db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 06:15:41 compute-0 podman[242185]: 2026-01-31 06:15:41.33081606 +0000 UTC m=+0.420605881 container start db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jemison, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:15:41 compute-0 podman[242185]: 2026-01-31 06:15:41.487988209 +0000 UTC m=+0.577778040 container attach db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:15:41 compute-0 podman[242205]: 2026-01-31 06:15:41.587691939 +0000 UTC m=+0.379317067 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:15:41 compute-0 podman[242203]: 2026-01-31 06:15:41.621389672 +0000 UTC m=+0.413705859 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 06:15:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:42 compute-0 lvm[242323]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:15:42 compute-0 lvm[242323]: VG ceph_vg0 finished
Jan 31 06:15:42 compute-0 lvm[242326]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:15:42 compute-0 lvm[242326]: VG ceph_vg1 finished
Jan 31 06:15:42 compute-0 lvm[242328]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:15:42 compute-0 lvm[242328]: VG ceph_vg2 finished
Jan 31 06:15:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:42 compute-0 ceph-mon[75251]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:42 compute-0 happy_jemison[242202]: {}
Jan 31 06:15:42 compute-0 systemd[1]: libpod-db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca.scope: Deactivated successfully.
Jan 31 06:15:42 compute-0 systemd[1]: libpod-db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca.scope: Consumed 1.034s CPU time.
Jan 31 06:15:42 compute-0 podman[242185]: 2026-01-31 06:15:42.203330249 +0000 UTC m=+1.293120080 container died db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jemison, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 06:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4443db4746e6b603e8c5ab14b87d0a1bca8bdf718fdea0e4d321c2716504b653-merged.mount: Deactivated successfully.
Jan 31 06:15:43 compute-0 podman[242185]: 2026-01-31 06:15:43.292232224 +0000 UTC m=+2.382022095 container remove db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jemison, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:15:43 compute-0 sudo[242107]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:15:43 compute-0 systemd[1]: libpod-conmon-db0679875648da3e586c7994b02cc3a190f908d8e498bd86d13a51f60ff8a6ca.scope: Deactivated successfully.
Jan 31 06:15:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:15:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:43 compute-0 sudo[242344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:15:43 compute-0 sudo[242344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:15:43 compute-0 sudo[242344]: pam_unix(sudo:session): session closed for user root
Jan 31 06:15:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:15:44
Jan 31 06:15:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:15:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:15:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'vms', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta']
Jan 31 06:15:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:15:44 compute-0 ceph-mon[75251]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:15:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:46 compute-0 ceph-mon[75251]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:48 compute-0 ceph-mon[75251]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:50 compute-0 ceph-mon[75251]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:15:50.209 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:15:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:15:50.210 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:15:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:15:50.210 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:15:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:52 compute-0 ceph-mon[75251]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:54 compute-0 ceph-mon[75251]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:15:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/416482110' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:15:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:15:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/416482110' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:15:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/416482110' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:15:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/416482110' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.539878) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840155539948, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1388, "num_deletes": 251, "total_data_size": 2207537, "memory_usage": 2247264, "flush_reason": "Manual Compaction"}
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840155596520, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2164965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15104, "largest_seqno": 16491, "table_properties": {"data_size": 2158488, "index_size": 3676, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13475, "raw_average_key_size": 19, "raw_value_size": 2145443, "raw_average_value_size": 3132, "num_data_blocks": 168, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769840012, "oldest_key_time": 1769840012, "file_creation_time": 1769840155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 56717 microseconds, and 5465 cpu microseconds.
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.596594) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2164965 bytes OK
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.596626) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.747931) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.747981) EVENT_LOG_v1 {"time_micros": 1769840155747972, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.748005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2201362, prev total WAL file size 2201362, number of live WAL files 2.
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.748748) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2114KB)], [35(7523KB)]
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840155748796, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9869161, "oldest_snapshot_seqno": -1}
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:15:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4086 keys, 8058564 bytes, temperature: kUnknown
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840155924823, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8058564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8028754, "index_size": 18500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 99817, "raw_average_key_size": 24, "raw_value_size": 7952379, "raw_average_value_size": 1946, "num_data_blocks": 781, "num_entries": 4086, "num_filter_entries": 4086, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769840155, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:15:55 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.925098) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8058564 bytes
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:56.034266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.0 rd, 45.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.3 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(8.3) write-amplify(3.7) OK, records in: 4600, records dropped: 514 output_compression: NoCompression
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:56.034315) EVENT_LOG_v1 {"time_micros": 1769840156034292, "job": 16, "event": "compaction_finished", "compaction_time_micros": 176115, "compaction_time_cpu_micros": 16550, "output_level": 6, "num_output_files": 1, "total_output_size": 8058564, "num_input_records": 4600, "num_output_records": 4086, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840156034937, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840156035925, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:55.748587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:56.036035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:56.036043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:56.036047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:56.036051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:15:56 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:15:56.036055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:15:56 compute-0 ceph-mon[75251]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:15:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:57 compute-0 ceph-mon[75251]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:15:59 compute-0 nova_compute[239679]: 2026-01-31 06:15:59.541 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:15:59 compute-0 nova_compute[239679]: 2026-01-31 06:15:59.542 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:15:59 compute-0 nova_compute[239679]: 2026-01-31 06:15:59.542 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:15:59 compute-0 nova_compute[239679]: 2026-01-31 06:15:59.542 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:15:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:00 compute-0 ceph-mon[75251]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:02 compute-0 ceph-mon[75251]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.497 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.498 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.499 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.499 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.499 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.500 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.500 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.500 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:16:03 compute-0 nova_compute[239679]: 2026-01-31 06:16:03.500 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 06:16:04 compute-0 ceph-mon[75251]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 06:16:04 compute-0 nova_compute[239679]: 2026-01-31 06:16:04.636 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:16:04 compute-0 nova_compute[239679]: 2026-01-31 06:16:04.637 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:16:04 compute-0 nova_compute[239679]: 2026-01-31 06:16:04.637 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:16:04 compute-0 nova_compute[239679]: 2026-01-31 06:16:04.637 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:16:04 compute-0 nova_compute[239679]: 2026-01-31 06:16:04.637 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:16:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:16:05 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055307613' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:16:05 compute-0 nova_compute[239679]: 2026-01-31 06:16:05.239 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:16:05 compute-0 nova_compute[239679]: 2026-01-31 06:16:05.390 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:16:05 compute-0 nova_compute[239679]: 2026-01-31 06:16:05.391 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:16:05 compute-0 nova_compute[239679]: 2026-01-31 06:16:05.391 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:16:05 compute-0 nova_compute[239679]: 2026-01-31 06:16:05.392 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:16:05 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4055307613' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:16:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 06:16:06 compute-0 ceph-mon[75251]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 31 06:16:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Jan 31 06:16:08 compute-0 ceph-mon[75251]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Jan 31 06:16:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 06:16:10 compute-0 ceph-mon[75251]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 31 06:16:11 compute-0 nova_compute[239679]: 2026-01-31 06:16:11.214 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:16:11 compute-0 nova_compute[239679]: 2026-01-31 06:16:11.214 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:16:11 compute-0 nova_compute[239679]: 2026-01-31 06:16:11.233 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:16:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Jan 31 06:16:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:16:11 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32909064' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:16:11 compute-0 nova_compute[239679]: 2026-01-31 06:16:11.724 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:16:11 compute-0 nova_compute[239679]: 2026-01-31 06:16:11.733 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:16:11 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/32909064' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:16:12 compute-0 podman[242415]: 2026-01-31 06:16:12.141898491 +0000 UTC m=+0.053566720 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 06:16:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:12 compute-0 podman[242414]: 2026-01-31 06:16:12.164612567 +0000 UTC m=+0.076397939 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:16:12 compute-0 ceph-mon[75251]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Jan 31 06:16:13 compute-0 nova_compute[239679]: 2026-01-31 06:16:13.098 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:16:13 compute-0 nova_compute[239679]: 2026-01-31 06:16:13.100 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:16:13 compute-0 nova_compute[239679]: 2026-01-31 06:16:13.101 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:16:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 06:16:13 compute-0 ceph-mon[75251]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 06:16:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:16:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:16:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:16:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:16:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:16:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:16:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Jan 31 06:16:16 compute-0 ceph-mon[75251]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Jan 31 06:16:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 06:16:18 compute-0 ceph-mon[75251]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 06:16:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Jan 31 06:16:20 compute-0 ceph-mon[75251]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Jan 31 06:16:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 31 06:16:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:22 compute-0 ceph-mon[75251]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 31 06:16:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 06:16:23 compute-0 ceph-mon[75251]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 06:16:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:26 compute-0 ceph-mon[75251]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:28 compute-0 ceph-mon[75251]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:30 compute-0 ceph-mon[75251]: pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:32 compute-0 ceph-mon[75251]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:34 compute-0 ceph-mon[75251]: pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:36 compute-0 ceph-mon[75251]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:37 compute-0 ceph-mon[75251]: pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:40 compute-0 ceph-mon[75251]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:42 compute-0 ceph-mon[75251]: pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:43 compute-0 podman[242455]: 2026-01-31 06:16:43.135446207 +0000 UTC m=+0.054624191 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:16:43 compute-0 podman[242454]: 2026-01-31 06:16:43.202965531 +0000 UTC m=+0.125447518 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:16:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:43 compute-0 sudo[242498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:16:43 compute-0 sudo[242498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:43 compute-0 sudo[242498]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:44 compute-0 sudo[242523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:16:44 compute-0 sudo[242523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:16:44
Jan 31 06:16:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:16:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:16:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Jan 31 06:16:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:16:44 compute-0 sudo[242523]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 06:16:44 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:16:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:16:44 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:16:44 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:16:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:16:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:16:44 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:16:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:16:44 compute-0 sudo[242579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:16:44 compute-0 sudo[242579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:44 compute-0 sudo[242579]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:44 compute-0 sudo[242604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:16:44 compute-0 sudo[242604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:44 compute-0 ceph-mon[75251]: pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:16:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:16:44 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:16:45 compute-0 podman[242641]: 2026-01-31 06:16:45.016797314 +0000 UTC m=+0.054150618 container create e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:16:45 compute-0 systemd[1]: Started libpod-conmon-e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7.scope.
Jan 31 06:16:45 compute-0 podman[242641]: 2026-01-31 06:16:44.990476912 +0000 UTC m=+0.027830236 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:16:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:16:45 compute-0 podman[242641]: 2026-01-31 06:16:45.108711085 +0000 UTC m=+0.146064419 container init e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:16:45 compute-0 podman[242641]: 2026-01-31 06:16:45.119127469 +0000 UTC m=+0.156480793 container start e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_villani, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 06:16:45 compute-0 cranky_villani[242658]: 167 167
Jan 31 06:16:45 compute-0 systemd[1]: libpod-e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7.scope: Deactivated successfully.
Jan 31 06:16:45 compute-0 conmon[242658]: conmon e30f7a22094d77b7cdb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7.scope/container/memory.events
Jan 31 06:16:45 compute-0 podman[242641]: 2026-01-31 06:16:45.129475461 +0000 UTC m=+0.166828815 container attach e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 06:16:45 compute-0 podman[242641]: 2026-01-31 06:16:45.131711724 +0000 UTC m=+0.169065058 container died e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6530fd78e481cc540c3f81e1d3774d83525edb18b7b6982cb699fcf885bf9c-merged.mount: Deactivated successfully.
Jan 31 06:16:45 compute-0 podman[242641]: 2026-01-31 06:16:45.202437068 +0000 UTC m=+0.239790362 container remove e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:16:45 compute-0 systemd[1]: libpod-conmon-e30f7a22094d77b7cdb8eac45f07539daaaf57a5bec38e748a5d5cb97e1927f7.scope: Deactivated successfully.
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:16:45 compute-0 podman[242684]: 2026-01-31 06:16:45.387365271 +0000 UTC m=+0.064779277 container create 7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:16:45 compute-0 systemd[1]: Started libpod-conmon-7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54.scope.
Jan 31 06:16:45 compute-0 podman[242684]: 2026-01-31 06:16:45.357771397 +0000 UTC m=+0.035185433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:16:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3bae03091aceb71aed6427183f45456994d351273ff9d98f9ba2f29fabbabb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3bae03091aceb71aed6427183f45456994d351273ff9d98f9ba2f29fabbabb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3bae03091aceb71aed6427183f45456994d351273ff9d98f9ba2f29fabbabb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3bae03091aceb71aed6427183f45456994d351273ff9d98f9ba2f29fabbabb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c3bae03091aceb71aed6427183f45456994d351273ff9d98f9ba2f29fabbabb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:45 compute-0 podman[242684]: 2026-01-31 06:16:45.53913096 +0000 UTC m=+0.216544926 container init 7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_driscoll, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:16:45 compute-0 podman[242684]: 2026-01-31 06:16:45.545420488 +0000 UTC m=+0.222834444 container start 7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_driscoll, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:16:45 compute-0 podman[242684]: 2026-01-31 06:16:45.593917365 +0000 UTC m=+0.271331321 container attach 7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:16:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:46 compute-0 gifted_driscoll[242701]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:16:46 compute-0 gifted_driscoll[242701]: --> All data devices are unavailable
Jan 31 06:16:46 compute-0 systemd[1]: libpod-7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54.scope: Deactivated successfully.
Jan 31 06:16:46 compute-0 podman[242684]: 2026-01-31 06:16:46.072504529 +0000 UTC m=+0.749918545 container died 7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_driscoll, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c3bae03091aceb71aed6427183f45456994d351273ff9d98f9ba2f29fabbabb-merged.mount: Deactivated successfully.
Jan 31 06:16:46 compute-0 podman[242684]: 2026-01-31 06:16:46.384869397 +0000 UTC m=+1.062283353 container remove 7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_driscoll, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:16:46 compute-0 systemd[1]: libpod-conmon-7dd22b7a0cfca51013086cc951f8bc2343aef5c2145cd7222b0c1772e1c29a54.scope: Deactivated successfully.
Jan 31 06:16:46 compute-0 sudo[242604]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:46 compute-0 sudo[242735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:16:46 compute-0 sudo[242735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:46 compute-0 sudo[242735]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:46 compute-0 sudo[242760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:16:46 compute-0 sudo[242760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:46 compute-0 podman[242796]: 2026-01-31 06:16:46.80844582 +0000 UTC m=+0.021949780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:16:46 compute-0 ceph-mon[75251]: pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:46 compute-0 podman[242796]: 2026-01-31 06:16:46.960334923 +0000 UTC m=+0.173838863 container create 4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bouman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 06:16:47 compute-0 systemd[1]: Started libpod-conmon-4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291.scope.
Jan 31 06:16:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:16:47 compute-0 podman[242796]: 2026-01-31 06:16:47.136389357 +0000 UTC m=+0.349893287 container init 4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bouman, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:16:47 compute-0 podman[242796]: 2026-01-31 06:16:47.147268333 +0000 UTC m=+0.360772273 container start 4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bouman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 31 06:16:47 compute-0 awesome_bouman[242812]: 167 167
Jan 31 06:16:47 compute-0 systemd[1]: libpod-4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291.scope: Deactivated successfully.
Jan 31 06:16:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:47 compute-0 podman[242796]: 2026-01-31 06:16:47.193519637 +0000 UTC m=+0.407023577 container attach 4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bouman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 06:16:47 compute-0 podman[242796]: 2026-01-31 06:16:47.194778993 +0000 UTC m=+0.408282963 container died 4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bouman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ede888c0ef7d1e38764b080a3f6585d7de6442a5e6bf367f75bd66802d4bdd0e-merged.mount: Deactivated successfully.
Jan 31 06:16:47 compute-0 podman[242796]: 2026-01-31 06:16:47.489908365 +0000 UTC m=+0.703412295 container remove 4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bouman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 06:16:47 compute-0 systemd[1]: libpod-conmon-4b74427bc6572b669a9ff75e25f3708896e491622744e8f2f1ac1852d7b28291.scope: Deactivated successfully.
Jan 31 06:16:47 compute-0 podman[242837]: 2026-01-31 06:16:47.601600744 +0000 UTC m=+0.025893411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:16:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:47 compute-0 podman[242837]: 2026-01-31 06:16:47.716242937 +0000 UTC m=+0.140535504 container create 15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 06:16:47 compute-0 systemd[1]: Started libpod-conmon-15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618.scope.
Jan 31 06:16:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d230dd117e12d54e8627624f0d31672b00e0e563dfcaf438483b89e8680d0b1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d230dd117e12d54e8627624f0d31672b00e0e563dfcaf438483b89e8680d0b1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d230dd117e12d54e8627624f0d31672b00e0e563dfcaf438483b89e8680d0b1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d230dd117e12d54e8627624f0d31672b00e0e563dfcaf438483b89e8680d0b1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:48 compute-0 podman[242837]: 2026-01-31 06:16:48.012559772 +0000 UTC m=+0.436852359 container init 15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:16:48 compute-0 ceph-mon[75251]: pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:48 compute-0 podman[242837]: 2026-01-31 06:16:48.021309918 +0000 UTC m=+0.445602525 container start 15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 06:16:48 compute-0 podman[242837]: 2026-01-31 06:16:48.179952671 +0000 UTC m=+0.604245268 container attach 15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:16:48 compute-0 jovial_shannon[242854]: {
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:     "0": [
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:         {
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "devices": [
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "/dev/loop3"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             ],
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_name": "ceph_lv0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_size": "21470642176",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "name": "ceph_lv0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "tags": {
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cluster_name": "ceph",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.crush_device_class": "",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.encrypted": "0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.objectstore": "bluestore",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osd_id": "0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.type": "block",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.vdo": "0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.with_tpm": "0"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             },
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "type": "block",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "vg_name": "ceph_vg0"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:         }
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:     ],
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:     "1": [
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:         {
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "devices": [
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "/dev/loop4"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             ],
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_name": "ceph_lv1",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_size": "21470642176",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "name": "ceph_lv1",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "tags": {
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cluster_name": "ceph",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.crush_device_class": "",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.encrypted": "0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.objectstore": "bluestore",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osd_id": "1",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.type": "block",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.vdo": "0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.with_tpm": "0"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             },
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "type": "block",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "vg_name": "ceph_vg1"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:         }
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:     ],
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:     "2": [
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:         {
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "devices": [
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "/dev/loop5"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             ],
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_name": "ceph_lv2",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_size": "21470642176",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "name": "ceph_lv2",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "tags": {
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.cluster_name": "ceph",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.crush_device_class": "",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.encrypted": "0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.objectstore": "bluestore",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osd_id": "2",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.type": "block",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.vdo": "0",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:                 "ceph.with_tpm": "0"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             },
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "type": "block",
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:             "vg_name": "ceph_vg2"
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:         }
Jan 31 06:16:48 compute-0 jovial_shannon[242854]:     ]
Jan 31 06:16:48 compute-0 jovial_shannon[242854]: }
Jan 31 06:16:48 compute-0 systemd[1]: libpod-15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618.scope: Deactivated successfully.
Jan 31 06:16:48 compute-0 podman[242837]: 2026-01-31 06:16:48.343543754 +0000 UTC m=+0.767836331 container died 15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 06:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d230dd117e12d54e8627624f0d31672b00e0e563dfcaf438483b89e8680d0b1d-merged.mount: Deactivated successfully.
Jan 31 06:16:48 compute-0 podman[242837]: 2026-01-31 06:16:48.41751355 +0000 UTC m=+0.841806117 container remove 15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shannon, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:16:48 compute-0 systemd[1]: libpod-conmon-15bbac4f55699a51cf87e2f49f1f2e495c69a18fdf0196a518fd729d05d59618.scope: Deactivated successfully.
Jan 31 06:16:48 compute-0 sudo[242760]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:48 compute-0 sudo[242877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:16:48 compute-0 sudo[242877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:48 compute-0 sudo[242877]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:48 compute-0 sudo[242902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:16:48 compute-0 sudo[242902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:48 compute-0 podman[242940]: 2026-01-31 06:16:48.80765387 +0000 UTC m=+0.051131533 container create bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:16:48 compute-0 systemd[1]: Started libpod-conmon-bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab.scope.
Jan 31 06:16:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:16:48 compute-0 podman[242940]: 2026-01-31 06:16:48.878845657 +0000 UTC m=+0.122323340 container init bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:16:48 compute-0 podman[242940]: 2026-01-31 06:16:48.788806038 +0000 UTC m=+0.032283711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:16:48 compute-0 podman[242940]: 2026-01-31 06:16:48.887040078 +0000 UTC m=+0.130517731 container start bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 06:16:48 compute-0 podman[242940]: 2026-01-31 06:16:48.890690761 +0000 UTC m=+0.134168444 container attach bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 06:16:48 compute-0 funny_goldstine[242956]: 167 167
Jan 31 06:16:48 compute-0 systemd[1]: libpod-bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab.scope: Deactivated successfully.
Jan 31 06:16:48 compute-0 podman[242940]: 2026-01-31 06:16:48.892269976 +0000 UTC m=+0.135747639 container died bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 06:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c26d626d0bdc4f199eaf1f4a95ab2fd056e13e7d519e26b85531af7a41c0441d-merged.mount: Deactivated successfully.
Jan 31 06:16:48 compute-0 podman[242940]: 2026-01-31 06:16:48.925950695 +0000 UTC m=+0.169428348 container remove bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_goldstine, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:16:48 compute-0 systemd[1]: libpod-conmon-bec1edf8f24b5108a4916aba9388fc1c9e907c179ff16738db3b4a9d6ede4eab.scope: Deactivated successfully.
Jan 31 06:16:49 compute-0 podman[242978]: 2026-01-31 06:16:49.087975553 +0000 UTC m=+0.041942174 container create c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 06:16:49 compute-0 systemd[1]: Started libpod-conmon-c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3.scope.
Jan 31 06:16:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731d528ef6a7f0730b8ef8234de3caf927540ce66e32d2c69b3fb342474f853e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731d528ef6a7f0730b8ef8234de3caf927540ce66e32d2c69b3fb342474f853e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731d528ef6a7f0730b8ef8234de3caf927540ce66e32d2c69b3fb342474f853e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731d528ef6a7f0730b8ef8234de3caf927540ce66e32d2c69b3fb342474f853e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:16:49 compute-0 podman[242978]: 2026-01-31 06:16:49.149755395 +0000 UTC m=+0.103722046 container init c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 06:16:49 compute-0 podman[242978]: 2026-01-31 06:16:49.154182549 +0000 UTC m=+0.108149170 container start c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 06:16:49 compute-0 podman[242978]: 2026-01-31 06:16:49.158424729 +0000 UTC m=+0.112391360 container attach c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cartwright, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:16:49 compute-0 podman[242978]: 2026-01-31 06:16:49.067877066 +0000 UTC m=+0.021843707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:16:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:49 compute-0 lvm[243074]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:16:49 compute-0 lvm[243073]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:16:49 compute-0 lvm[243074]: VG ceph_vg1 finished
Jan 31 06:16:49 compute-0 lvm[243073]: VG ceph_vg0 finished
Jan 31 06:16:49 compute-0 lvm[243076]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:16:49 compute-0 lvm[243076]: VG ceph_vg2 finished
Jan 31 06:16:49 compute-0 focused_cartwright[242995]: {}
Jan 31 06:16:49 compute-0 systemd[1]: libpod-c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3.scope: Deactivated successfully.
Jan 31 06:16:49 compute-0 systemd[1]: libpod-c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3.scope: Consumed 1.190s CPU time.
Jan 31 06:16:49 compute-0 podman[242978]: 2026-01-31 06:16:49.989720058 +0000 UTC m=+0.943686679 container died c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 06:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-731d528ef6a7f0730b8ef8234de3caf927540ce66e32d2c69b3fb342474f853e-merged.mount: Deactivated successfully.
Jan 31 06:16:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:16:50.210 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:16:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:16:50.213 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:16:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:16:50.213 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:16:50 compute-0 podman[242978]: 2026-01-31 06:16:50.338526753 +0000 UTC m=+1.292493374 container remove c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 06:16:50 compute-0 systemd[1]: libpod-conmon-c5de24233ed7add80da61ad0f429218fa16b85aa6f86ac10b16a85c1cd1e8ce3.scope: Deactivated successfully.
Jan 31 06:16:50 compute-0 sudo[242902]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:16:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:16:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:16:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:16:50 compute-0 sudo[243092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:16:50 compute-0 sudo[243092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:16:50 compute-0 sudo[243092]: pam_unix(sudo:session): session closed for user root
Jan 31 06:16:50 compute-0 ceph-mon[75251]: pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:16:50 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:16:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:52 compute-0 ceph-mon[75251]: pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.063 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.064 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.204 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.204 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.204 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:16:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:16:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2086263564' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:16:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:16:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2086263564' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.319 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.319 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.319 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.320 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.320 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.320 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.320 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.321 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.321 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.522 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.523 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.523 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.523 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:16:54 compute-0 nova_compute[239679]: 2026-01-31 06:16:54.524 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:16:54 compute-0 ceph-mon[75251]: pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2086263564' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:16:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2086263564' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:16:55 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:16:55 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715994339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:16:55 compute-0 nova_compute[239679]: 2026-01-31 06:16:55.089 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:16:55 compute-0 nova_compute[239679]: 2026-01-31 06:16:55.216 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:16:55 compute-0 nova_compute[239679]: 2026-01-31 06:16:55.218 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:16:55 compute-0 nova_compute[239679]: 2026-01-31 06:16:55.218 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:16:55 compute-0 nova_compute[239679]: 2026-01-31 06:16:55.218 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:16:55 compute-0 ceph-osd[86016]: bluestore.MempoolThread fragmentation_score=0.000120 took=0.000021s
Jan 31 06:16:55 compute-0 ceph-osd[88127]: bluestore.MempoolThread fragmentation_score=0.000143 took=0.000037s
Jan 31 06:16:55 compute-0 ceph-osd[87070]: bluestore.MempoolThread fragmentation_score=0.000145 took=0.000037s
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:55 compute-0 nova_compute[239679]: 2026-01-31 06:16:55.757 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:16:55 compute-0 nova_compute[239679]: 2026-01-31 06:16:55.758 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:16:55 compute-0 nova_compute[239679]: 2026-01-31 06:16:55.776 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:16:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:16:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3715994339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:16:55 compute-0 ceph-mon[75251]: pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:16:56 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062683056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:16:56 compute-0 nova_compute[239679]: 2026-01-31 06:16:56.276 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:16:56 compute-0 nova_compute[239679]: 2026-01-31 06:16:56.281 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:16:56 compute-0 nova_compute[239679]: 2026-01-31 06:16:56.343 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:16:56 compute-0 nova_compute[239679]: 2026-01-31 06:16:56.344 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:16:56 compute-0 nova_compute[239679]: 2026-01-31 06:16:56.345 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:16:56 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1062683056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:16:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:16:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:58 compute-0 ceph-mon[75251]: pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:16:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:00 compute-0 ceph-mon[75251]: pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:02 compute-0 ceph-mon[75251]: pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:04 compute-0 ceph-mon[75251]: pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:07 compute-0 ceph-mon[75251]: pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:08 compute-0 ceph-mon[75251]: pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:09 compute-0 ceph-mon[75251]: pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:12 compute-0 ceph-mon[75251]: pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:14 compute-0 ceph-mon[75251]: pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:14 compute-0 podman[243162]: 2026-01-31 06:17:14.146277589 +0000 UTC m=+0.063744228 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 06:17:14 compute-0 podman[243163]: 2026-01-31 06:17:14.150681303 +0000 UTC m=+0.068452421 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:17:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:17:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:17:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:17:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:17:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:17:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:17:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:16 compute-0 ceph-mon[75251]: pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:18 compute-0 ceph-mon[75251]: pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:20 compute-0 ceph-mon[75251]: pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:22 compute-0 ceph-mon[75251]: pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:25 compute-0 ceph-mon[75251]: pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:26 compute-0 ceph-mon[75251]: pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:28 compute-0 ceph-mon[75251]: pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:30 compute-0 ceph-mon[75251]: pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:32 compute-0 ceph-mon[75251]: pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:34 compute-0 ceph-mon[75251]: pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:36 compute-0 ceph-mon[75251]: pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:38 compute-0 ceph-mon[75251]: pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:39 compute-0 nova_compute[239679]: 2026-01-31 06:17:39.508 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:39 compute-0 nova_compute[239679]: 2026-01-31 06:17:39.508 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 06:17:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:40 compute-0 ceph-mon[75251]: pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:40 compute-0 nova_compute[239679]: 2026-01-31 06:17:40.800 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 06:17:40 compute-0 nova_compute[239679]: 2026-01-31 06:17:40.801 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:40 compute-0 nova_compute[239679]: 2026-01-31 06:17:40.801 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 06:17:40 compute-0 nova_compute[239679]: 2026-01-31 06:17:40.843 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:42 compute-0 ceph-mon[75251]: pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:42 compute-0 nova_compute[239679]: 2026-01-31 06:17:42.887 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:42 compute-0 nova_compute[239679]: 2026-01-31 06:17:42.887 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:17:42 compute-0 nova_compute[239679]: 2026-01-31 06:17:42.888 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:42 compute-0 nova_compute[239679]: 2026-01-31 06:17:42.949 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:17:42 compute-0 nova_compute[239679]: 2026-01-31 06:17:42.950 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:17:42 compute-0 nova_compute[239679]: 2026-01-31 06:17:42.950 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:17:42 compute-0 nova_compute[239679]: 2026-01-31 06:17:42.951 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:17:42 compute-0 nova_compute[239679]: 2026-01-31 06:17:42.951 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:17:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:17:43 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097024004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:17:43 compute-0 nova_compute[239679]: 2026-01-31 06:17:43.444 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:17:43 compute-0 nova_compute[239679]: 2026-01-31 06:17:43.561 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:17:43 compute-0 nova_compute[239679]: 2026-01-31 06:17:43.563 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5154MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:17:43 compute-0 nova_compute[239679]: 2026-01-31 06:17:43.563 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:17:43 compute-0 nova_compute[239679]: 2026-01-31 06:17:43.563 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:17:43 compute-0 nova_compute[239679]: 2026-01-31 06:17:43.679 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:17:43 compute-0 nova_compute[239679]: 2026-01-31 06:17:43.679 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:17:43 compute-0 nova_compute[239679]: 2026-01-31 06:17:43.698 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:17:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:43 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3097024004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:17:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:17:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739160714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:17:44 compute-0 nova_compute[239679]: 2026-01-31 06:17:44.349 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:17:44 compute-0 nova_compute[239679]: 2026-01-31 06:17:44.353 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:17:44 compute-0 nova_compute[239679]: 2026-01-31 06:17:44.390 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:17:44 compute-0 nova_compute[239679]: 2026-01-31 06:17:44.392 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:17:44 compute-0 nova_compute[239679]: 2026-01-31 06:17:44.392 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:17:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:17:44
Jan 31 06:17:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:17:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:17:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'vms', 'volumes', 'default.rgw.log', 'images', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Jan 31 06:17:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:17:44 compute-0 ceph-mon[75251]: pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:44 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/739160714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.013 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.014 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.014 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.100 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.100 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.100 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.100 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.101 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:45 compute-0 podman[243251]: 2026-01-31 06:17:45.114890939 +0000 UTC m=+0.041321956 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 06:17:45 compute-0 podman[243250]: 2026-01-31 06:17:45.133238336 +0000 UTC m=+0.061083623 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:17:45 compute-0 nova_compute[239679]: 2026-01-31 06:17:45.508 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:46 compute-0 ceph-mon[75251]: pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:46 compute-0 nova_compute[239679]: 2026-01-31 06:17:46.503 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:17:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:48 compute-0 ceph-mon[75251]: pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:17:50.211 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:17:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:17:50.211 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:17:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:17:50.211 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:17:50 compute-0 sudo[243293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:17:50 compute-0 sudo[243293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:50 compute-0 sudo[243293]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:50 compute-0 sudo[243318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 06:17:50 compute-0 sudo[243318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:50 compute-0 ceph-mon[75251]: pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:51 compute-0 podman[243383]: 2026-01-31 06:17:51.217871816 +0000 UTC m=+0.146010758 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 06:17:51 compute-0 podman[243383]: 2026-01-31 06:17:51.311464855 +0000 UTC m=+0.239603777 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:17:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:52 compute-0 sudo[243318]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:17:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:17:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:52 compute-0 sudo[243573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:17:52 compute-0 sudo[243573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:52 compute-0 sudo[243573]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:52 compute-0 sudo[243598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:17:52 compute-0 sudo[243598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:52 compute-0 sudo[243598]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:17:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:17:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:17:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:17:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:17:52 compute-0 ceph-mon[75251]: pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:17:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:17:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:17:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:17:52 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:17:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:17:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:17:53 compute-0 sudo[243653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:17:53 compute-0 sudo[243653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:53 compute-0 sudo[243653]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:53 compute-0 sudo[243678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:17:53 compute-0 sudo[243678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:53 compute-0 podman[243716]: 2026-01-31 06:17:53.427399725 +0000 UTC m=+0.043937199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:17:53 compute-0 podman[243716]: 2026-01-31 06:17:53.581248173 +0000 UTC m=+0.197785557 container create 08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:17:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:53 compute-0 systemd[1]: Started libpod-conmon-08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982.scope.
Jan 31 06:17:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:17:53 compute-0 podman[243716]: 2026-01-31 06:17:53.93156617 +0000 UTC m=+0.548103644 container init 08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 06:17:53 compute-0 podman[243716]: 2026-01-31 06:17:53.937264951 +0000 UTC m=+0.553802385 container start 08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rhodes, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:17:53 compute-0 vigilant_rhodes[243732]: 167 167
Jan 31 06:17:53 compute-0 systemd[1]: libpod-08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982.scope: Deactivated successfully.
Jan 31 06:17:53 compute-0 podman[243716]: 2026-01-31 06:17:53.963028457 +0000 UTC m=+0.579565841 container attach 08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rhodes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:17:53 compute-0 podman[243716]: 2026-01-31 06:17:53.96348441 +0000 UTC m=+0.580021794 container died 08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rhodes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 06:17:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:17:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:17:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:17:54 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:17:54 compute-0 ceph-mon[75251]: pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-28c1e73ccb397456050980fb6a52932489d89d202a82fe22748a1933ca7186fb-merged.mount: Deactivated successfully.
Jan 31 06:17:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:17:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/811984065' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:17:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:17:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/811984065' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:17:55 compute-0 podman[243716]: 2026-01-31 06:17:55.205669214 +0000 UTC m=+1.822206588 container remove 08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:17:55 compute-0 systemd[1]: libpod-conmon-08f95d5ac300bae50ab6141e5fee287770f9e4c7673f0875887aaad0af277982.scope: Deactivated successfully.
Jan 31 06:17:55 compute-0 podman[243757]: 2026-01-31 06:17:55.317510997 +0000 UTC m=+0.029563854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:17:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/811984065' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:17:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/811984065' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:17:55 compute-0 podman[243757]: 2026-01-31 06:17:55.623226638 +0000 UTC m=+0.335279465 container create c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jang, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:17:55 compute-0 systemd[1]: Started libpod-conmon-c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76.scope.
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:17:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:17:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de4a8b31b4ded7b212ef7d26c20bb2d9f91267b3c8ef47c57415d577c273d3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de4a8b31b4ded7b212ef7d26c20bb2d9f91267b3c8ef47c57415d577c273d3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de4a8b31b4ded7b212ef7d26c20bb2d9f91267b3c8ef47c57415d577c273d3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de4a8b31b4ded7b212ef7d26c20bb2d9f91267b3c8ef47c57415d577c273d3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de4a8b31b4ded7b212ef7d26c20bb2d9f91267b3c8ef47c57415d577c273d3a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:56 compute-0 podman[243757]: 2026-01-31 06:17:56.062629527 +0000 UTC m=+0.774682424 container init c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:17:56 compute-0 podman[243757]: 2026-01-31 06:17:56.070673384 +0000 UTC m=+0.782726241 container start c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 06:17:56 compute-0 podman[243757]: 2026-01-31 06:17:56.105614889 +0000 UTC m=+0.817667746 container attach c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 06:17:56 compute-0 stupefied_jang[243774]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:17:56 compute-0 stupefied_jang[243774]: --> All data devices are unavailable
Jan 31 06:17:56 compute-0 systemd[1]: libpod-c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76.scope: Deactivated successfully.
Jan 31 06:17:56 compute-0 podman[243757]: 2026-01-31 06:17:56.473618425 +0000 UTC m=+1.185671272 container died c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jang, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:17:56 compute-0 ceph-mon[75251]: pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4de4a8b31b4ded7b212ef7d26c20bb2d9f91267b3c8ef47c57415d577c273d3a-merged.mount: Deactivated successfully.
Jan 31 06:17:57 compute-0 podman[243757]: 2026-01-31 06:17:57.055769059 +0000 UTC m=+1.767821916 container remove c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:17:57 compute-0 systemd[1]: libpod-conmon-c0cddbd69f80e669fb2d7ecfa3da5378eaeb4f276ecff6ab71585137b6d8ce76.scope: Deactivated successfully.
Jan 31 06:17:57 compute-0 sudo[243678]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:57 compute-0 sudo[243806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:17:57 compute-0 sudo[243806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:57 compute-0 sudo[243806]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:57 compute-0 sudo[243831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:17:57 compute-0 sudo[243831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:17:57 compute-0 podman[243868]: 2026-01-31 06:17:57.445830817 +0000 UTC m=+0.022497656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:17:57 compute-0 podman[243868]: 2026-01-31 06:17:57.697299907 +0000 UTC m=+0.273966746 container create 3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 06:17:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:57 compute-0 systemd[1]: Started libpod-conmon-3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853.scope.
Jan 31 06:17:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:17:57 compute-0 podman[243868]: 2026-01-31 06:17:57.853629345 +0000 UTC m=+0.430296274 container init 3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_meitner, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:17:57 compute-0 podman[243868]: 2026-01-31 06:17:57.859782749 +0000 UTC m=+0.436449588 container start 3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:17:57 compute-0 relaxed_meitner[243884]: 167 167
Jan 31 06:17:57 compute-0 systemd[1]: libpod-3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853.scope: Deactivated successfully.
Jan 31 06:17:57 compute-0 podman[243868]: 2026-01-31 06:17:57.881779929 +0000 UTC m=+0.458446878 container attach 3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_meitner, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 06:17:57 compute-0 podman[243868]: 2026-01-31 06:17:57.882490979 +0000 UTC m=+0.459157818 container died 3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:17:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c13148a8e2a5c7bc4492cc92fa86050b91a94e5d904a4307b80d3983d390d0e8-merged.mount: Deactivated successfully.
Jan 31 06:17:58 compute-0 podman[243868]: 2026-01-31 06:17:58.112109963 +0000 UTC m=+0.688776792 container remove 3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 06:17:58 compute-0 systemd[1]: libpod-conmon-3a2927a3fd3412e487bb28bd39baf6a5b463ab8c6a5beb2ca21ca58e70ab7853.scope: Deactivated successfully.
Jan 31 06:17:58 compute-0 podman[243909]: 2026-01-31 06:17:58.259493709 +0000 UTC m=+0.073842223 container create f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:17:58 compute-0 podman[243909]: 2026-01-31 06:17:58.203942333 +0000 UTC m=+0.018290847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:17:58 compute-0 systemd[1]: Started libpod-conmon-f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f.scope.
Jan 31 06:17:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13b75ff7003a46dc75b6e15435a0ba092035f9046d1a58dcf671f9518fa368e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13b75ff7003a46dc75b6e15435a0ba092035f9046d1a58dcf671f9518fa368e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13b75ff7003a46dc75b6e15435a0ba092035f9046d1a58dcf671f9518fa368e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f13b75ff7003a46dc75b6e15435a0ba092035f9046d1a58dcf671f9518fa368e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:17:58 compute-0 podman[243909]: 2026-01-31 06:17:58.465844187 +0000 UTC m=+0.280192701 container init f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kowalevski, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 06:17:58 compute-0 podman[243909]: 2026-01-31 06:17:58.470779217 +0000 UTC m=+0.285127741 container start f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 06:17:58 compute-0 podman[243909]: 2026-01-31 06:17:58.499190968 +0000 UTC m=+0.313539472 container attach f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]: {
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:     "0": [
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:         {
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "devices": [
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "/dev/loop3"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             ],
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_name": "ceph_lv0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_size": "21470642176",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "name": "ceph_lv0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "tags": {
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cluster_name": "ceph",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.crush_device_class": "",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.encrypted": "0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.objectstore": "bluestore",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osd_id": "0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.type": "block",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.vdo": "0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.with_tpm": "0"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             },
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "type": "block",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "vg_name": "ceph_vg0"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:         }
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:     ],
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:     "1": [
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:         {
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "devices": [
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "/dev/loop4"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             ],
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_name": "ceph_lv1",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_size": "21470642176",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "name": "ceph_lv1",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "tags": {
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cluster_name": "ceph",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.crush_device_class": "",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.encrypted": "0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.objectstore": "bluestore",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osd_id": "1",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.type": "block",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.vdo": "0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.with_tpm": "0"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             },
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "type": "block",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "vg_name": "ceph_vg1"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:         }
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:     ],
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:     "2": [
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:         {
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "devices": [
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "/dev/loop5"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             ],
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_name": "ceph_lv2",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_size": "21470642176",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "name": "ceph_lv2",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "tags": {
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.cluster_name": "ceph",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.crush_device_class": "",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.encrypted": "0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.objectstore": "bluestore",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osd_id": "2",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.type": "block",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.vdo": "0",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:                 "ceph.with_tpm": "0"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             },
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "type": "block",
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:             "vg_name": "ceph_vg2"
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:         }
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]:     ]
Jan 31 06:17:58 compute-0 condescending_kowalevski[243926]: }
Jan 31 06:17:58 compute-0 systemd[1]: libpod-f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f.scope: Deactivated successfully.
Jan 31 06:17:58 compute-0 podman[243909]: 2026-01-31 06:17:58.803496748 +0000 UTC m=+0.617845232 container died f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kowalevski, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 06:17:59 compute-0 ceph-mon[75251]: pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f13b75ff7003a46dc75b6e15435a0ba092035f9046d1a58dcf671f9518fa368e-merged.mount: Deactivated successfully.
Jan 31 06:17:59 compute-0 podman[243909]: 2026-01-31 06:17:59.432966046 +0000 UTC m=+1.247314580 container remove f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kowalevski, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:17:59 compute-0 systemd[1]: libpod-conmon-f227a99b2c1ff0502466ffbdec4449730b726d123e675d4bcab397108f04bb1f.scope: Deactivated successfully.
Jan 31 06:17:59 compute-0 sudo[243831]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:59 compute-0 sudo[243948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:17:59 compute-0 sudo[243948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:59 compute-0 sudo[243948]: pam_unix(sudo:session): session closed for user root
Jan 31 06:17:59 compute-0 sudo[243973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:17:59 compute-0 sudo[243973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:17:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:17:59 compute-0 podman[244011]: 2026-01-31 06:17:59.842601336 +0000 UTC m=+0.024772740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:18:00 compute-0 podman[244011]: 2026-01-31 06:18:00.126983864 +0000 UTC m=+0.309155218 container create e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:18:00 compute-0 ceph-mon[75251]: pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:00 compute-0 systemd[1]: Started libpod-conmon-e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329.scope.
Jan 31 06:18:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:18:00 compute-0 podman[244011]: 2026-01-31 06:18:00.726889788 +0000 UTC m=+0.909061102 container init e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 06:18:00 compute-0 podman[244011]: 2026-01-31 06:18:00.735556972 +0000 UTC m=+0.917728296 container start e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:18:00 compute-0 infallible_nash[244027]: 167 167
Jan 31 06:18:00 compute-0 systemd[1]: libpod-e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329.scope: Deactivated successfully.
Jan 31 06:18:01 compute-0 podman[244011]: 2026-01-31 06:18:01.047651411 +0000 UTC m=+1.229822725 container attach e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:18:01 compute-0 podman[244011]: 2026-01-31 06:18:01.048073063 +0000 UTC m=+1.230244377 container died e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c994ed022ac6e1f243a403dad455a05318934a0f124f006108a5a04bd237acf-merged.mount: Deactivated successfully.
Jan 31 06:18:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:01 compute-0 podman[244011]: 2026-01-31 06:18:01.77925232 +0000 UTC m=+1.961423644 container remove e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:18:01 compute-0 systemd[1]: libpod-conmon-e632dc53c73fad9e635fc930effabbf96fe983239391dfa7d7bf15cf319e0329.scope: Deactivated successfully.
Jan 31 06:18:01 compute-0 podman[244052]: 2026-01-31 06:18:01.940929589 +0000 UTC m=+0.083863926 container create 9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_proskuriakova, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 06:18:01 compute-0 podman[244052]: 2026-01-31 06:18:01.875150774 +0000 UTC m=+0.018085131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:18:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:02 compute-0 systemd[1]: Started libpod-conmon-9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890.scope.
Jan 31 06:18:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcb4c0cf84f3eb3189ed19d15ee97ca2ca2c7c8b4f36c100a8b6a7c5feea489/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcb4c0cf84f3eb3189ed19d15ee97ca2ca2c7c8b4f36c100a8b6a7c5feea489/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcb4c0cf84f3eb3189ed19d15ee97ca2ca2c7c8b4f36c100a8b6a7c5feea489/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:18:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcb4c0cf84f3eb3189ed19d15ee97ca2ca2c7c8b4f36c100a8b6a7c5feea489/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:18:02 compute-0 ceph-mon[75251]: pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:02 compute-0 podman[244052]: 2026-01-31 06:18:02.768642317 +0000 UTC m=+0.911576734 container init 9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 31 06:18:02 compute-0 podman[244052]: 2026-01-31 06:18:02.775647674 +0000 UTC m=+0.918582031 container start 9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:18:02 compute-0 podman[244052]: 2026-01-31 06:18:02.946841211 +0000 UTC m=+1.089775538 container attach 9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:18:03 compute-0 lvm[244144]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:18:03 compute-0 lvm[244144]: VG ceph_vg0 finished
Jan 31 06:18:03 compute-0 lvm[244147]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:18:03 compute-0 lvm[244147]: VG ceph_vg1 finished
Jan 31 06:18:03 compute-0 lvm[244149]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:18:03 compute-0 lvm[244149]: VG ceph_vg2 finished
Jan 31 06:18:03 compute-0 magical_proskuriakova[244068]: {}
Jan 31 06:18:03 compute-0 systemd[1]: libpod-9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890.scope: Deactivated successfully.
Jan 31 06:18:03 compute-0 podman[244052]: 2026-01-31 06:18:03.509501126 +0000 UTC m=+1.652435453 container died 9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:18:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bcb4c0cf84f3eb3189ed19d15ee97ca2ca2c7c8b4f36c100a8b6a7c5feea489-merged.mount: Deactivated successfully.
Jan 31 06:18:03 compute-0 podman[244052]: 2026-01-31 06:18:03.574154429 +0000 UTC m=+1.717088746 container remove 9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 06:18:03 compute-0 systemd[1]: libpod-conmon-9a0114d5f2b08aa25ae3e41b851f21dcea0d0b0820eb9a68c7003c1e8be03890.scope: Deactivated successfully.
Jan 31 06:18:03 compute-0 sudo[243973]: pam_unix(sudo:session): session closed for user root
Jan 31 06:18:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:18:03 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:18:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:18:03 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:18:03 compute-0 sudo[244163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:18:03 compute-0 sudo[244163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:18:03 compute-0 sudo[244163]: pam_unix(sudo:session): session closed for user root
Jan 31 06:18:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:18:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:18:04 compute-0 ceph-mon[75251]: pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:06 compute-0 ceph-mon[75251]: pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:08 compute-0 ceph-mon[75251]: pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:10 compute-0 ceph-mon[75251]: pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:12 compute-0 ceph-mon[75251]: pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:13 compute-0 ceph-mon[75251]: pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:18:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:18:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:18:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:18:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:18:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:18:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:16 compute-0 podman[244189]: 2026-01-31 06:18:16.130942664 +0000 UTC m=+0.047664045 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:18:16 compute-0 podman[244188]: 2026-01-31 06:18:16.148918891 +0000 UTC m=+0.067435142 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 06:18:16 compute-0 ceph-mon[75251]: pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:18 compute-0 ceph-mon[75251]: pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:20 compute-0 ceph-mon[75251]: pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:21 compute-0 ceph-mon[75251]: pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:24 compute-0 ceph-mon[75251]: pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:26 compute-0 ceph-mon[75251]: pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:28 compute-0 ceph-mon[75251]: pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:30 compute-0 ceph-mon[75251]: pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:32 compute-0 ceph-mon[75251]: pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:34 compute-0 ceph-mon[75251]: pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:36 compute-0 ceph-mon[75251]: pgmap v869: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:38 compute-0 ceph-mon[75251]: pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:40 compute-0 ceph-mon[75251]: pgmap v871: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:41 compute-0 nova_compute[239679]: 2026-01-31 06:18:41.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:41 compute-0 nova_compute[239679]: 2026-01-31 06:18:41.851 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:18:41 compute-0 nova_compute[239679]: 2026-01-31 06:18:41.852 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:18:41 compute-0 nova_compute[239679]: 2026-01-31 06:18:41.852 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:18:41 compute-0 nova_compute[239679]: 2026-01-31 06:18:41.852 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:18:41 compute-0 nova_compute[239679]: 2026-01-31 06:18:41.852 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:18:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:18:42 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/748801418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:18:42 compute-0 nova_compute[239679]: 2026-01-31 06:18:42.355 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:18:42 compute-0 nova_compute[239679]: 2026-01-31 06:18:42.463 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:18:42 compute-0 nova_compute[239679]: 2026-01-31 06:18:42.464 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:18:42 compute-0 nova_compute[239679]: 2026-01-31 06:18:42.465 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:18:42 compute-0 nova_compute[239679]: 2026-01-31 06:18:42.465 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:18:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:42 compute-0 ceph-mon[75251]: pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:42 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/748801418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:18:43 compute-0 nova_compute[239679]: 2026-01-31 06:18:43.390 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:18:43 compute-0 nova_compute[239679]: 2026-01-31 06:18:43.391 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:18:43 compute-0 nova_compute[239679]: 2026-01-31 06:18:43.483 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing inventories for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 06:18:43 compute-0 nova_compute[239679]: 2026-01-31 06:18:43.540 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Updating ProviderTree inventory for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 06:18:43 compute-0 nova_compute[239679]: 2026-01-31 06:18:43.540 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Updating inventory in ProviderTree for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 06:18:43 compute-0 nova_compute[239679]: 2026-01-31 06:18:43.561 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing aggregate associations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 06:18:43 compute-0 nova_compute[239679]: 2026-01-31 06:18:43.587 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing trait associations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 06:18:43 compute-0 nova_compute[239679]: 2026-01-31 06:18:43.611 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:18:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:18:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962360055' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:18:44 compute-0 nova_compute[239679]: 2026-01-31 06:18:44.147 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:18:44 compute-0 nova_compute[239679]: 2026-01-31 06:18:44.152 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:18:44 compute-0 nova_compute[239679]: 2026-01-31 06:18:44.270 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:18:44 compute-0 nova_compute[239679]: 2026-01-31 06:18:44.272 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:18:44 compute-0 nova_compute[239679]: 2026-01-31 06:18:44.272 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:18:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:18:44
Jan 31 06:18:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:18:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:18:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'images', '.mgr', 'default.rgw.log', 'vms']
Jan 31 06:18:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:18:44 compute-0 ceph-mon[75251]: pgmap v873: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:44 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3962360055' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:18:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:45 compute-0 ceph-mon[75251]: pgmap v874: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:46 compute-0 nova_compute[239679]: 2026-01-31 06:18:46.273 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:46 compute-0 nova_compute[239679]: 2026-01-31 06:18:46.643 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:46 compute-0 nova_compute[239679]: 2026-01-31 06:18:46.644 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:18:46 compute-0 nova_compute[239679]: 2026-01-31 06:18:46.644 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.069 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.069 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.070 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.070 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.070 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.070 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.071 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.071 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:18:47 compute-0 podman[244277]: 2026-01-31 06:18:47.138909034 +0000 UTC m=+0.054215896 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 06:18:47 compute-0 podman[244276]: 2026-01-31 06:18:47.239013075 +0000 UTC m=+0.151612370 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 31 06:18:47 compute-0 nova_compute[239679]: 2026-01-31 06:18:47.301 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:18:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:48 compute-0 ceph-mon[75251]: pgmap v875: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:18:50.212 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:18:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:18:50.212 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:18:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:18:50.212 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:18:50 compute-0 ceph-mon[75251]: pgmap v876: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:52 compute-0 ceph-mon[75251]: pgmap v877: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:18:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2144804053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:18:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:18:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2144804053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:18:54 compute-0 ceph-mon[75251]: pgmap v878: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2144804053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:18:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2144804053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:18:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:18:56 compute-0 ceph-mon[75251]: pgmap v879: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:18:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:58 compute-0 ceph-mon[75251]: pgmap v880: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:18:59 compute-0 ceph-mon[75251]: pgmap v881: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:02 compute-0 ceph-mon[75251]: pgmap v882: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:03 compute-0 sudo[244320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:19:03 compute-0 sudo[244320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:03 compute-0 sudo[244320]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:03 compute-0 sudo[244345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:19:03 compute-0 sudo[244345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:04 compute-0 sudo[244345]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:19:04 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:19:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:19:04 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:19:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:19:04 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:19:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:19:04 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:19:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:19:04 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:19:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:19:04 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:19:04 compute-0 sudo[244402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:19:04 compute-0 sudo[244402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:04 compute-0 sudo[244402]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:04 compute-0 sudo[244427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:19:04 compute-0 sudo[244427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:04 compute-0 podman[244464]: 2026-01-31 06:19:04.70027327 +0000 UTC m=+0.057363896 container create 5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_goldberg, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 06:19:04 compute-0 podman[244464]: 2026-01-31 06:19:04.66025681 +0000 UTC m=+0.017347496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:19:04 compute-0 ceph-mon[75251]: pgmap v883: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:19:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:19:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:19:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:19:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:19:04 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:19:05 compute-0 systemd[1]: Started libpod-conmon-5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9.scope.
Jan 31 06:19:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:19:05 compute-0 podman[244464]: 2026-01-31 06:19:05.113580773 +0000 UTC m=+0.470671419 container init 5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_goldberg, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 06:19:05 compute-0 podman[244464]: 2026-01-31 06:19:05.121310443 +0000 UTC m=+0.478401069 container start 5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_goldberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:19:05 compute-0 cranky_goldberg[244480]: 167 167
Jan 31 06:19:05 compute-0 systemd[1]: libpod-5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9.scope: Deactivated successfully.
Jan 31 06:19:05 compute-0 podman[244464]: 2026-01-31 06:19:05.14786777 +0000 UTC m=+0.504958416 container attach 5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:19:05 compute-0 podman[244464]: 2026-01-31 06:19:05.148956501 +0000 UTC m=+0.506047167 container died 5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_goldberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-66234e1ed4389bc54f7580bd5f4e9cad27dfca0c0ca7f306df6a2a6c0fd544e7-merged.mount: Deactivated successfully.
Jan 31 06:19:05 compute-0 podman[244464]: 2026-01-31 06:19:05.406026354 +0000 UTC m=+0.763117020 container remove 5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 06:19:05 compute-0 systemd[1]: libpod-conmon-5d00e4beba1cc8949466cba336b038296508a6f899dca58069775de94142afd9.scope: Deactivated successfully.
Jan 31 06:19:05 compute-0 podman[244504]: 2026-01-31 06:19:05.546957639 +0000 UTC m=+0.061041860 container create 49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:19:05 compute-0 systemd[1]: Started libpod-conmon-49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d.scope.
Jan 31 06:19:05 compute-0 podman[244504]: 2026-01-31 06:19:05.504614033 +0000 UTC m=+0.018698274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:19:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7c99d1352ecf09fc8eb0b94cb1537281f743889b8cd3665ce3d365b722526c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7c99d1352ecf09fc8eb0b94cb1537281f743889b8cd3665ce3d365b722526c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7c99d1352ecf09fc8eb0b94cb1537281f743889b8cd3665ce3d365b722526c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7c99d1352ecf09fc8eb0b94cb1537281f743889b8cd3665ce3d365b722526c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad7c99d1352ecf09fc8eb0b94cb1537281f743889b8cd3665ce3d365b722526c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:05 compute-0 podman[244504]: 2026-01-31 06:19:05.708450589 +0000 UTC m=+0.222534890 container init 49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_johnson, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 06:19:05 compute-0 podman[244504]: 2026-01-31 06:19:05.715258093 +0000 UTC m=+0.229342314 container start 49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 06:19:05 compute-0 podman[244504]: 2026-01-31 06:19:05.74253092 +0000 UTC m=+0.256615161 container attach 49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 06:19:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:06 compute-0 ceph-mon[75251]: pgmap v884: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:06 compute-0 pedantic_johnson[244521]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:19:06 compute-0 pedantic_johnson[244521]: --> All data devices are unavailable
Jan 31 06:19:06 compute-0 systemd[1]: libpod-49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d.scope: Deactivated successfully.
Jan 31 06:19:06 compute-0 podman[244504]: 2026-01-31 06:19:06.098876972 +0000 UTC m=+0.612961203 container died 49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 06:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad7c99d1352ecf09fc8eb0b94cb1537281f743889b8cd3665ce3d365b722526c-merged.mount: Deactivated successfully.
Jan 31 06:19:06 compute-0 podman[244504]: 2026-01-31 06:19:06.280192297 +0000 UTC m=+0.794276528 container remove 49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 06:19:06 compute-0 systemd[1]: libpod-conmon-49dcec4335bf3f9ee0e0973d7a95df3cd89c9d6da3695e79579473da53ddbb9d.scope: Deactivated successfully.
Jan 31 06:19:06 compute-0 sudo[244427]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:06 compute-0 sudo[244553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:19:06 compute-0 sudo[244553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:06 compute-0 sudo[244553]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:06 compute-0 sudo[244578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:19:06 compute-0 sudo[244578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:06 compute-0 podman[244617]: 2026-01-31 06:19:06.694472869 +0000 UTC m=+0.067248277 container create 64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_moore, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 06:19:06 compute-0 systemd[1]: Started libpod-conmon-64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78.scope.
Jan 31 06:19:06 compute-0 podman[244617]: 2026-01-31 06:19:06.655946202 +0000 UTC m=+0.028721660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:19:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:19:06 compute-0 podman[244617]: 2026-01-31 06:19:06.79876459 +0000 UTC m=+0.171539998 container init 64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_moore, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:19:06 compute-0 podman[244617]: 2026-01-31 06:19:06.804096122 +0000 UTC m=+0.176871530 container start 64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 06:19:06 compute-0 bold_moore[244632]: 167 167
Jan 31 06:19:06 compute-0 systemd[1]: libpod-64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78.scope: Deactivated successfully.
Jan 31 06:19:06 compute-0 conmon[244632]: conmon 64878912d6d11f641fe7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78.scope/container/memory.events
Jan 31 06:19:06 compute-0 podman[244617]: 2026-01-31 06:19:06.816152246 +0000 UTC m=+0.188927674 container attach 64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 06:19:06 compute-0 podman[244617]: 2026-01-31 06:19:06.816693041 +0000 UTC m=+0.189468449 container died 64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 06:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-00e7c8da2a640c5290b2045848f52f195c3d59eae830d5a84dbec6d3aa7b194a-merged.mount: Deactivated successfully.
Jan 31 06:19:07 compute-0 podman[244617]: 2026-01-31 06:19:07.045383546 +0000 UTC m=+0.418158994 container remove 64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_moore, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:19:07 compute-0 systemd[1]: libpod-conmon-64878912d6d11f641fe7cc2495dde076eb39198206c36fa9f24d7511999d3c78.scope: Deactivated successfully.
Jan 31 06:19:07 compute-0 podman[244656]: 2026-01-31 06:19:07.2280381 +0000 UTC m=+0.060031242 container create ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:19:07 compute-0 podman[244656]: 2026-01-31 06:19:07.187050812 +0000 UTC m=+0.019043984 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:19:07 compute-0 systemd[1]: Started libpod-conmon-ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602.scope.
Jan 31 06:19:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f00f98970aa01984d3f4115a190e6b1ec72df28c838504210560362a371602/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f00f98970aa01984d3f4115a190e6b1ec72df28c838504210560362a371602/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f00f98970aa01984d3f4115a190e6b1ec72df28c838504210560362a371602/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f00f98970aa01984d3f4115a190e6b1ec72df28c838504210560362a371602/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:07 compute-0 podman[244656]: 2026-01-31 06:19:07.41797315 +0000 UTC m=+0.249966392 container init ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 06:19:07 compute-0 podman[244656]: 2026-01-31 06:19:07.422910921 +0000 UTC m=+0.254904083 container start ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:19:07 compute-0 podman[244656]: 2026-01-31 06:19:07.486169023 +0000 UTC m=+0.318162205 container attach ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 06:19:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:07 compute-0 loving_khayyam[244672]: {
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:     "0": [
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:         {
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "devices": [
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "/dev/loop3"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             ],
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_name": "ceph_lv0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_size": "21470642176",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "name": "ceph_lv0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "tags": {
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cluster_name": "ceph",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.crush_device_class": "",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.encrypted": "0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.objectstore": "bluestore",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osd_id": "0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.type": "block",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.vdo": "0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.with_tpm": "0"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             },
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "type": "block",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "vg_name": "ceph_vg0"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:         }
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:     ],
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:     "1": [
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:         {
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "devices": [
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "/dev/loop4"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             ],
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_name": "ceph_lv1",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_size": "21470642176",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "name": "ceph_lv1",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "tags": {
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cluster_name": "ceph",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.crush_device_class": "",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.encrypted": "0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.objectstore": "bluestore",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osd_id": "1",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.type": "block",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.vdo": "0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.with_tpm": "0"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             },
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "type": "block",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "vg_name": "ceph_vg1"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:         }
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:     ],
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:     "2": [
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:         {
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "devices": [
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "/dev/loop5"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             ],
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_name": "ceph_lv2",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_size": "21470642176",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "name": "ceph_lv2",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "tags": {
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.cluster_name": "ceph",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.crush_device_class": "",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.encrypted": "0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.objectstore": "bluestore",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osd_id": "2",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.type": "block",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.vdo": "0",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:                 "ceph.with_tpm": "0"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             },
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "type": "block",
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:             "vg_name": "ceph_vg2"
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:         }
Jan 31 06:19:07 compute-0 loving_khayyam[244672]:     ]
Jan 31 06:19:07 compute-0 loving_khayyam[244672]: }
Jan 31 06:19:07 compute-0 systemd[1]: libpod-ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602.scope: Deactivated successfully.
Jan 31 06:19:07 compute-0 podman[244656]: 2026-01-31 06:19:07.679679036 +0000 UTC m=+0.511672208 container died ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 06:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f00f98970aa01984d3f4115a190e6b1ec72df28c838504210560362a371602-merged.mount: Deactivated successfully.
Jan 31 06:19:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:07 compute-0 podman[244656]: 2026-01-31 06:19:07.816529315 +0000 UTC m=+0.648522467 container remove ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:19:07 compute-0 systemd[1]: libpod-conmon-ef1786189d4c02e87b8e71eb5c47f8c1c563f0644eb0b2bd488a2e1951544602.scope: Deactivated successfully.
Jan 31 06:19:07 compute-0 sudo[244578]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:07 compute-0 sudo[244696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:19:07 compute-0 sudo[244696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:07 compute-0 sudo[244696]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:07 compute-0 sudo[244721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:19:07 compute-0 sudo[244721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:08 compute-0 podman[244757]: 2026-01-31 06:19:08.248789219 +0000 UTC m=+0.039783245 container create df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mirzakhani, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:19:08 compute-0 systemd[1]: Started libpod-conmon-df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea.scope.
Jan 31 06:19:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:19:08 compute-0 podman[244757]: 2026-01-31 06:19:08.325413742 +0000 UTC m=+0.116407758 container init df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mirzakhani, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:19:08 compute-0 podman[244757]: 2026-01-31 06:19:08.227946275 +0000 UTC m=+0.018940281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:19:08 compute-0 podman[244757]: 2026-01-31 06:19:08.33132842 +0000 UTC m=+0.122322406 container start df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:19:08 compute-0 podman[244757]: 2026-01-31 06:19:08.335279093 +0000 UTC m=+0.126273069 container attach df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mirzakhani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 31 06:19:08 compute-0 systemd[1]: libpod-df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea.scope: Deactivated successfully.
Jan 31 06:19:08 compute-0 keen_mirzakhani[244774]: 167 167
Jan 31 06:19:08 compute-0 podman[244757]: 2026-01-31 06:19:08.337235989 +0000 UTC m=+0.128229985 container died df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 06:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3fddd951edf1fc94cab063d3899245485b0f6cc7b2986020d7fa9ee682904f-merged.mount: Deactivated successfully.
Jan 31 06:19:08 compute-0 podman[244757]: 2026-01-31 06:19:08.374397367 +0000 UTC m=+0.165391353 container remove df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mirzakhani, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 06:19:08 compute-0 systemd[1]: libpod-conmon-df6aa0ad7f85d5435d1ba9511f8b68bd5d7038b8735b3d4d817b144a507b23ea.scope: Deactivated successfully.
Jan 31 06:19:08 compute-0 podman[244799]: 2026-01-31 06:19:08.532076909 +0000 UTC m=+0.065130856 container create ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_varahamihira, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:19:08 compute-0 podman[244799]: 2026-01-31 06:19:08.485601955 +0000 UTC m=+0.018655912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:19:08 compute-0 systemd[1]: Started libpod-conmon-ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08.scope.
Jan 31 06:19:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac756e4dd573b5ac5ee7ed83d06f6c9652f00b7707c29ef46f72d3e45921fde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac756e4dd573b5ac5ee7ed83d06f6c9652f00b7707c29ef46f72d3e45921fde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac756e4dd573b5ac5ee7ed83d06f6c9652f00b7707c29ef46f72d3e45921fde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac756e4dd573b5ac5ee7ed83d06f6c9652f00b7707c29ef46f72d3e45921fde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:19:08 compute-0 podman[244799]: 2026-01-31 06:19:08.680995811 +0000 UTC m=+0.214049778 container init ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:19:08 compute-0 podman[244799]: 2026-01-31 06:19:08.686862968 +0000 UTC m=+0.219916905 container start ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:19:08 compute-0 podman[244799]: 2026-01-31 06:19:08.692107227 +0000 UTC m=+0.225161164 container attach ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:19:08 compute-0 ceph-mon[75251]: pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:09 compute-0 lvm[244894]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:19:09 compute-0 lvm[244894]: VG ceph_vg0 finished
Jan 31 06:19:09 compute-0 lvm[244897]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:19:09 compute-0 lvm[244897]: VG ceph_vg1 finished
Jan 31 06:19:09 compute-0 lvm[244899]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:19:09 compute-0 lvm[244899]: VG ceph_vg2 finished
Jan 31 06:19:09 compute-0 suspicious_varahamihira[244817]: {}
Jan 31 06:19:09 compute-0 systemd[1]: libpod-ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08.scope: Deactivated successfully.
Jan 31 06:19:09 compute-0 systemd[1]: libpod-ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08.scope: Consumed 1.125s CPU time.
Jan 31 06:19:09 compute-0 podman[244799]: 2026-01-31 06:19:09.496636927 +0000 UTC m=+1.029690864 container died ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 06:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ac756e4dd573b5ac5ee7ed83d06f6c9652f00b7707c29ef46f72d3e45921fde-merged.mount: Deactivated successfully.
Jan 31 06:19:09 compute-0 podman[244799]: 2026-01-31 06:19:09.537541692 +0000 UTC m=+1.070595619 container remove ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 06:19:09 compute-0 systemd[1]: libpod-conmon-ce311c660733b51a5dbd02908e7a6e647231cb81c738105f6a7c15ffb6bb5b08.scope: Deactivated successfully.
Jan 31 06:19:09 compute-0 sudo[244721]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:19:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:19:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:19:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:19:09 compute-0 sudo[244914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:19:09 compute-0 sudo[244914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:19:09 compute-0 sudo[244914]: pam_unix(sudo:session): session closed for user root
Jan 31 06:19:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:19:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:19:10 compute-0 ceph-mon[75251]: pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:12 compute-0 ceph-mon[75251]: pgmap v887: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:14 compute-0 ceph-mon[75251]: pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:19:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:19:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:19:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:19:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:19:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:19:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:16 compute-0 ceph-mon[75251]: pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:18 compute-0 podman[244940]: 2026-01-31 06:19:18.138873175 +0000 UTC m=+0.055426870 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 06:19:18 compute-0 podman[244939]: 2026-01-31 06:19:18.170109585 +0000 UTC m=+0.086612368 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:19:18 compute-0 ceph-mon[75251]: pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:20 compute-0 ceph-mon[75251]: pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 06:19:20 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4186895155' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 06:19:20 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14380 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 06:19:20 compute-0 ceph-mgr[75550]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 06:19:20 compute-0 ceph-mgr[75550]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 06:19:21 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/4186895155' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 06:19:21 compute-0 ceph-mon[75251]: from='client.14380 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 06:19:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:22 compute-0 ceph-mon[75251]: pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:24 compute-0 ceph-mon[75251]: pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:26 compute-0 ceph-mon[75251]: pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:29 compute-0 ceph-mon[75251]: pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:30 compute-0 ceph-mon[75251]: pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:32 compute-0 ceph-mon[75251]: pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:34 compute-0 ceph-mon[75251]: pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:36 compute-0 ceph-mon[75251]: pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:38 compute-0 ceph-mon[75251]: pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:41 compute-0 ceph-mon[75251]: pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:42 compute-0 ceph-mon[75251]: pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.576769) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840382576838, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2013, "num_deletes": 252, "total_data_size": 3443617, "memory_usage": 3507256, "flush_reason": "Manual Compaction"}
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840382617893, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1942553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16492, "largest_seqno": 18504, "table_properties": {"data_size": 1936135, "index_size": 3301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16306, "raw_average_key_size": 20, "raw_value_size": 1921829, "raw_average_value_size": 2384, "num_data_blocks": 153, "num_entries": 806, "num_filter_entries": 806, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769840156, "oldest_key_time": 1769840156, "file_creation_time": 1769840382, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 41223 microseconds, and 5935 cpu microseconds.
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.617990) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1942553 bytes OK
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.618029) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.626792) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.626845) EVENT_LOG_v1 {"time_micros": 1769840382626831, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.626885) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3435178, prev total WAL file size 3435178, number of live WAL files 2.
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.628195) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353034' seq:72057594037927935, type:22 .. '6D67727374617400373537' seq:0, type:0; will stop at (end)
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1897KB)], [38(7869KB)]
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840382628282, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 10001117, "oldest_snapshot_seqno": -1}
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4484 keys, 8070758 bytes, temperature: kUnknown
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840382765175, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8070758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8039873, "index_size": 18551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 108206, "raw_average_key_size": 24, "raw_value_size": 7958031, "raw_average_value_size": 1774, "num_data_blocks": 788, "num_entries": 4484, "num_filter_entries": 4484, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769840382, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.765403) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8070758 bytes
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.767918) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.0 rd, 58.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.7 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(9.3) write-amplify(4.2) OK, records in: 4892, records dropped: 408 output_compression: NoCompression
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.767948) EVENT_LOG_v1 {"time_micros": 1769840382767934, "job": 18, "event": "compaction_finished", "compaction_time_micros": 136950, "compaction_time_cpu_micros": 24660, "output_level": 6, "num_output_files": 1, "total_output_size": 8070758, "num_input_records": 4892, "num_output_records": 4484, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840382768534, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840382769947, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.628060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.770009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.770021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.770023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.770025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:42 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:42.770027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:43 compute-0 nova_compute[239679]: 2026-01-31 06:19:43.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:43 compute-0 nova_compute[239679]: 2026-01-31 06:19:43.978 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:19:43 compute-0 nova_compute[239679]: 2026-01-31 06:19:43.978 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:19:43 compute-0 nova_compute[239679]: 2026-01-31 06:19:43.979 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:19:43 compute-0 nova_compute[239679]: 2026-01-31 06:19:43.979 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:19:43 compute-0 nova_compute[239679]: 2026-01-31 06:19:43.979 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:19:44 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:19:44 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232078378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:19:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:19:44
Jan 31 06:19:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:19:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:19:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.meta']
Jan 31 06:19:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:19:44 compute-0 nova_compute[239679]: 2026-01-31 06:19:44.477 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:19:44 compute-0 nova_compute[239679]: 2026-01-31 06:19:44.623 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:19:44 compute-0 nova_compute[239679]: 2026-01-31 06:19:44.624 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:19:44 compute-0 nova_compute[239679]: 2026-01-31 06:19:44.624 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:19:44 compute-0 nova_compute[239679]: 2026-01-31 06:19:44.625 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:19:44 compute-0 ceph-mon[75251]: pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:44 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1232078378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:19:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 06:19:45 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532054522' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14384 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:19:45 compute-0 nova_compute[239679]: 2026-01-31 06:19:45.711 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:19:45 compute-0 nova_compute[239679]: 2026-01-31 06:19:45.712 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:19:45 compute-0 nova_compute[239679]: 2026-01-31 06:19:45.735 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:19:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:45 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2532054522' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:45.897309) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840385897353, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 285, "num_deletes": 251, "total_data_size": 70719, "memory_usage": 77512, "flush_reason": "Manual Compaction"}
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840385899716, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 70326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18505, "largest_seqno": 18789, "table_properties": {"data_size": 68419, "index_size": 135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4944, "raw_average_key_size": 18, "raw_value_size": 64692, "raw_average_value_size": 239, "num_data_blocks": 6, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769840383, "oldest_key_time": 1769840383, "file_creation_time": 1769840385, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 2434 microseconds, and 798 cpu microseconds.
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:45.899747) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 70326 bytes OK
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:45.899764) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:45.901515) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:45.901532) EVENT_LOG_v1 {"time_micros": 1769840385901528, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:45.901551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 68593, prev total WAL file size 68593, number of live WAL files 2.
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:45.902219) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(68KB)], [41(7881KB)]
Jan 31 06:19:45 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840385902290, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8141084, "oldest_snapshot_seqno": -1}
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4245 keys, 6366135 bytes, temperature: kUnknown
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840386081577, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6366135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6338531, "index_size": 15911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103931, "raw_average_key_size": 24, "raw_value_size": 6262486, "raw_average_value_size": 1475, "num_data_blocks": 668, "num_entries": 4245, "num_filter_entries": 4245, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769840385, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:46.081994) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6366135 bytes
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:46.127389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.4 rd, 35.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.7 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(206.3) write-amplify(90.5) OK, records in: 4754, records dropped: 509 output_compression: NoCompression
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:46.127438) EVENT_LOG_v1 {"time_micros": 1769840386127419, "job": 20, "event": "compaction_finished", "compaction_time_micros": 179414, "compaction_time_cpu_micros": 23548, "output_level": 6, "num_output_files": 1, "total_output_size": 6366135, "num_input_records": 4754, "num_output_records": 4245, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840386127905, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840386129037, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:45.902040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:46.129162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:46.129170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:46.129172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:46.129174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:46 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:19:46.129176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:19:46 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:19:46 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065383151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:19:46 compute-0 nova_compute[239679]: 2026-01-31 06:19:46.339 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:19:46 compute-0 nova_compute[239679]: 2026-01-31 06:19:46.346 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:19:46 compute-0 nova_compute[239679]: 2026-01-31 06:19:46.616 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:19:46 compute-0 nova_compute[239679]: 2026-01-31 06:19:46.620 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:19:46 compute-0 nova_compute[239679]: 2026-01-31 06:19:46.621 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:19:47 compute-0 ceph-mon[75251]: from='client.14384 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 06:19:47 compute-0 ceph-mon[75251]: pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:47 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4065383151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:19:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:47 compute-0 nova_compute[239679]: 2026-01-31 06:19:47.622 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:47 compute-0 nova_compute[239679]: 2026-01-31 06:19:47.623 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:47 compute-0 nova_compute[239679]: 2026-01-31 06:19:47.624 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:19:47 compute-0 nova_compute[239679]: 2026-01-31 06:19:47.624 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:19:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:48 compute-0 nova_compute[239679]: 2026-01-31 06:19:48.038 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:19:48 compute-0 nova_compute[239679]: 2026-01-31 06:19:48.039 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:48 compute-0 nova_compute[239679]: 2026-01-31 06:19:48.040 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:48 compute-0 nova_compute[239679]: 2026-01-31 06:19:48.040 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:48 compute-0 nova_compute[239679]: 2026-01-31 06:19:48.040 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:48 compute-0 nova_compute[239679]: 2026-01-31 06:19:48.041 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:19:48 compute-0 ceph-mon[75251]: pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:48 compute-0 nova_compute[239679]: 2026-01-31 06:19:48.508 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:48 compute-0 nova_compute[239679]: 2026-01-31 06:19:48.509 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:19:49 compute-0 podman[245024]: 2026-01-31 06:19:49.141123873 +0000 UTC m=+0.062574894 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 06:19:49 compute-0 podman[245023]: 2026-01-31 06:19:49.164421047 +0000 UTC m=+0.086356532 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 31 06:19:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:19:50.212 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:19:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:19:50.213 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:19:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:19:50.213 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:19:50 compute-0 ceph-mon[75251]: pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:53 compute-0 ceph-mon[75251]: pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:19:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/460977468' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:19:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:19:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/460977468' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:19:54 compute-0 ceph-mon[75251]: pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/460977468' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:19:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/460977468' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:19:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:19:56 compute-0 ceph-mon[75251]: pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:19:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:58 compute-0 ceph-mon[75251]: pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:19:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:00 compute-0 ceph-mon[75251]: pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:03 compute-0 ceph-mon[75251]: pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:04 compute-0 ceph-mon[75251]: pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:06 compute-0 ceph-mon[75251]: pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:08 compute-0 ceph-mon[75251]: pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:09 compute-0 sudo[245070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:20:09 compute-0 sudo[245070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:09 compute-0 sudo[245070]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:09 compute-0 sudo[245095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:20:09 compute-0 sudo[245095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:10 compute-0 sudo[245095]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:20:10 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:20:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:20:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:20:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:20:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:20:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:20:10 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:20:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:20:10 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:20:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:20:10 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:20:10 compute-0 sudo[245151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:20:10 compute-0 sudo[245151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:10 compute-0 sudo[245151]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:10 compute-0 sudo[245176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:20:10 compute-0 sudo[245176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:10 compute-0 podman[245214]: 2026-01-31 06:20:10.645868576 +0000 UTC m=+0.062200733 container create 7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:20:10 compute-0 systemd[1]: Started libpod-conmon-7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29.scope.
Jan 31 06:20:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:20:10 compute-0 podman[245214]: 2026-01-31 06:20:10.620622577 +0000 UTC m=+0.036954774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:20:10 compute-0 podman[245214]: 2026-01-31 06:20:10.725424623 +0000 UTC m=+0.141756820 container init 7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 06:20:10 compute-0 podman[245214]: 2026-01-31 06:20:10.730895218 +0000 UTC m=+0.147227405 container start 7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:20:10 compute-0 podman[245214]: 2026-01-31 06:20:10.734498811 +0000 UTC m=+0.150830968 container attach 7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:20:10 compute-0 systemd[1]: libpod-7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29.scope: Deactivated successfully.
Jan 31 06:20:10 compute-0 intelligent_mclean[245231]: 167 167
Jan 31 06:20:10 compute-0 conmon[245231]: conmon 7612c4b817c188ae5d14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29.scope/container/memory.events
Jan 31 06:20:10 compute-0 podman[245214]: 2026-01-31 06:20:10.739571796 +0000 UTC m=+0.155903953 container died 7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mclean, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-525726049f138344506bdbd048850f9fbe4c5695cd53130dab05b8d76cbb2a22-merged.mount: Deactivated successfully.
Jan 31 06:20:10 compute-0 podman[245214]: 2026-01-31 06:20:10.781299974 +0000 UTC m=+0.197632121 container remove 7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mclean, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:20:10 compute-0 systemd[1]: libpod-conmon-7612c4b817c188ae5d143591087e7adbc3b3e679d7103451b3130a6713c87d29.scope: Deactivated successfully.
Jan 31 06:20:10 compute-0 ceph-mon[75251]: pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:20:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:20:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:20:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:20:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:20:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:20:10 compute-0 podman[245253]: 2026-01-31 06:20:10.916900877 +0000 UTC m=+0.050117178 container create c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:20:10 compute-0 systemd[1]: Started libpod-conmon-c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3.scope.
Jan 31 06:20:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:20:10 compute-0 podman[245253]: 2026-01-31 06:20:10.891639528 +0000 UTC m=+0.024855899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ac2f3f5d4f73207d1396faf6bb667ee2f7e42573c025929ac1e8660bc79fbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ac2f3f5d4f73207d1396faf6bb667ee2f7e42573c025929ac1e8660bc79fbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ac2f3f5d4f73207d1396faf6bb667ee2f7e42573c025929ac1e8660bc79fbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ac2f3f5d4f73207d1396faf6bb667ee2f7e42573c025929ac1e8660bc79fbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ac2f3f5d4f73207d1396faf6bb667ee2f7e42573c025929ac1e8660bc79fbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:11 compute-0 podman[245253]: 2026-01-31 06:20:11.010500354 +0000 UTC m=+0.143716725 container init c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 06:20:11 compute-0 podman[245253]: 2026-01-31 06:20:11.021409815 +0000 UTC m=+0.154626096 container start c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:20:11 compute-0 podman[245253]: 2026-01-31 06:20:11.025666736 +0000 UTC m=+0.158883057 container attach c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 06:20:11 compute-0 stoic_shockley[245270]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:20:11 compute-0 stoic_shockley[245270]: --> All data devices are unavailable
Jan 31 06:20:11 compute-0 systemd[1]: libpod-c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3.scope: Deactivated successfully.
Jan 31 06:20:11 compute-0 podman[245253]: 2026-01-31 06:20:11.473241696 +0000 UTC m=+0.606457977 container died c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 06:20:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6ac2f3f5d4f73207d1396faf6bb667ee2f7e42573c025929ac1e8660bc79fbe-merged.mount: Deactivated successfully.
Jan 31 06:20:11 compute-0 podman[245253]: 2026-01-31 06:20:11.545465034 +0000 UTC m=+0.678681315 container remove c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 06:20:11 compute-0 systemd[1]: libpod-conmon-c1a6453827e1d228e28a4adf90bdd6eb18fe92d2e2aff86ccecc15038f479db3.scope: Deactivated successfully.
Jan 31 06:20:11 compute-0 sudo[245176]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:11 compute-0 sudo[245304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:20:11 compute-0 sudo[245304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:11 compute-0 sudo[245304]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:11 compute-0 sudo[245329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:20:11 compute-0 sudo[245329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:11 compute-0 podman[245366]: 2026-01-31 06:20:11.959503029 +0000 UTC m=+0.032899288 container create ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poitras, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 06:20:11 compute-0 systemd[1]: Started libpod-conmon-ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7.scope.
Jan 31 06:20:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:20:12 compute-0 podman[245366]: 2026-01-31 06:20:12.016225455 +0000 UTC m=+0.089621744 container init ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 06:20:12 compute-0 podman[245366]: 2026-01-31 06:20:12.025057777 +0000 UTC m=+0.098454046 container start ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:20:12 compute-0 reverent_poitras[245382]: 167 167
Jan 31 06:20:12 compute-0 podman[245366]: 2026-01-31 06:20:12.028826984 +0000 UTC m=+0.102223293 container attach ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poitras, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:20:12 compute-0 systemd[1]: libpod-ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7.scope: Deactivated successfully.
Jan 31 06:20:12 compute-0 podman[245366]: 2026-01-31 06:20:12.029299577 +0000 UTC m=+0.102695846 container died ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 06:20:12 compute-0 podman[245366]: 2026-01-31 06:20:11.944467051 +0000 UTC m=+0.017863340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-16ffe7c065d8afbb62adc816b079d1ee5ecd4910073b8e7adb7faedc94d3e2e9-merged.mount: Deactivated successfully.
Jan 31 06:20:12 compute-0 podman[245366]: 2026-01-31 06:20:12.063938984 +0000 UTC m=+0.137335243 container remove ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:20:12 compute-0 systemd[1]: libpod-conmon-ca862d75ab2771a4ae95a88727aad86753ef9a2953bccf748c7a4601c83366b7.scope: Deactivated successfully.
Jan 31 06:20:12 compute-0 podman[245407]: 2026-01-31 06:20:12.193549427 +0000 UTC m=+0.045568860 container create cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bardeen, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 31 06:20:12 compute-0 systemd[1]: Started libpod-conmon-cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579.scope.
Jan 31 06:20:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3f4d4573624f82ea379aeaaf534db3037b681c9914f95f55bf802a136be2c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3f4d4573624f82ea379aeaaf534db3037b681c9914f95f55bf802a136be2c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3f4d4573624f82ea379aeaaf534db3037b681c9914f95f55bf802a136be2c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3f4d4573624f82ea379aeaaf534db3037b681c9914f95f55bf802a136be2c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:12 compute-0 podman[245407]: 2026-01-31 06:20:12.170705976 +0000 UTC m=+0.022725449 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:20:12 compute-0 podman[245407]: 2026-01-31 06:20:12.2705479 +0000 UTC m=+0.122567353 container init cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:20:12 compute-0 podman[245407]: 2026-01-31 06:20:12.277504168 +0000 UTC m=+0.129523601 container start cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bardeen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:20:12 compute-0 podman[245407]: 2026-01-31 06:20:12.280964937 +0000 UTC m=+0.132984380 container attach cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bardeen, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]: {
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:     "0": [
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:         {
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "devices": [
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "/dev/loop3"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             ],
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_name": "ceph_lv0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_size": "21470642176",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "name": "ceph_lv0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "tags": {
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cluster_name": "ceph",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.crush_device_class": "",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.encrypted": "0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.objectstore": "bluestore",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osd_id": "0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.type": "block",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.vdo": "0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.with_tpm": "0"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             },
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "type": "block",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "vg_name": "ceph_vg0"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:         }
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:     ],
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:     "1": [
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:         {
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "devices": [
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "/dev/loop4"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             ],
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_name": "ceph_lv1",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_size": "21470642176",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "name": "ceph_lv1",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "tags": {
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cluster_name": "ceph",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.crush_device_class": "",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.encrypted": "0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.objectstore": "bluestore",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osd_id": "1",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.type": "block",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.vdo": "0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.with_tpm": "0"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             },
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "type": "block",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "vg_name": "ceph_vg1"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:         }
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:     ],
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:     "2": [
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:         {
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "devices": [
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "/dev/loop5"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             ],
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_name": "ceph_lv2",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_size": "21470642176",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "name": "ceph_lv2",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "tags": {
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.cluster_name": "ceph",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.crush_device_class": "",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.encrypted": "0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.objectstore": "bluestore",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osd_id": "2",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.type": "block",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.vdo": "0",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:                 "ceph.with_tpm": "0"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             },
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "type": "block",
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:             "vg_name": "ceph_vg2"
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:         }
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]:     ]
Jan 31 06:20:12 compute-0 fervent_bardeen[245424]: }
Jan 31 06:20:12 compute-0 systemd[1]: libpod-cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579.scope: Deactivated successfully.
Jan 31 06:20:12 compute-0 podman[245407]: 2026-01-31 06:20:12.563541087 +0000 UTC m=+0.415560520 container died cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:20:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-da3f4d4573624f82ea379aeaaf534db3037b681c9914f95f55bf802a136be2c2-merged.mount: Deactivated successfully.
Jan 31 06:20:12 compute-0 podman[245407]: 2026-01-31 06:20:12.901374861 +0000 UTC m=+0.753394334 container remove cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 06:20:12 compute-0 ceph-mon[75251]: pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:12 compute-0 systemd[1]: libpod-conmon-cbbc5651f16c04915153ef84b3a62b99ebef4b50f2a7f68b9d2236762f18d579.scope: Deactivated successfully.
Jan 31 06:20:12 compute-0 sudo[245329]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:13 compute-0 sudo[245445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:20:13 compute-0 sudo[245445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:13 compute-0 sudo[245445]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:13 compute-0 sudo[245470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:20:13 compute-0 sudo[245470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:13 compute-0 podman[245507]: 2026-01-31 06:20:13.358882944 +0000 UTC m=+0.045610090 container create d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 06:20:13 compute-0 systemd[1]: Started libpod-conmon-d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c.scope.
Jan 31 06:20:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:20:13 compute-0 podman[245507]: 2026-01-31 06:20:13.33838516 +0000 UTC m=+0.025112286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:20:13 compute-0 podman[245507]: 2026-01-31 06:20:13.437297748 +0000 UTC m=+0.124024874 container init d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:20:13 compute-0 podman[245507]: 2026-01-31 06:20:13.444861813 +0000 UTC m=+0.131588929 container start d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:20:13 compute-0 systemd[1]: libpod-d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c.scope: Deactivated successfully.
Jan 31 06:20:13 compute-0 magical_jemison[245523]: 167 167
Jan 31 06:20:13 compute-0 conmon[245523]: conmon d5449b4ee1a2796d6215 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c.scope/container/memory.events
Jan 31 06:20:13 compute-0 podman[245507]: 2026-01-31 06:20:13.450617607 +0000 UTC m=+0.137344863 container attach d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 06:20:13 compute-0 podman[245507]: 2026-01-31 06:20:13.451875203 +0000 UTC m=+0.138602309 container died d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b94ae3ee7d545d83bc58e551433d6d01d3ae6982fe5c98531803ea453b039b65-merged.mount: Deactivated successfully.
Jan 31 06:20:13 compute-0 podman[245507]: 2026-01-31 06:20:13.494744174 +0000 UTC m=+0.181471280 container remove d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jemison, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:20:13 compute-0 systemd[1]: libpod-conmon-d5449b4ee1a2796d621563a54a98a30cfed49b21d8d8aaca205f35839585f90c.scope: Deactivated successfully.
Jan 31 06:20:13 compute-0 podman[245548]: 2026-01-31 06:20:13.626680093 +0000 UTC m=+0.035176933 container create b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chaplygin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:20:13 compute-0 systemd[1]: Started libpod-conmon-b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950.scope.
Jan 31 06:20:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf9d705a812827f2e12cd6ce27e51935d8294b41d6fcedad61a4730803a0e14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf9d705a812827f2e12cd6ce27e51935d8294b41d6fcedad61a4730803a0e14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf9d705a812827f2e12cd6ce27e51935d8294b41d6fcedad61a4730803a0e14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddf9d705a812827f2e12cd6ce27e51935d8294b41d6fcedad61a4730803a0e14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:20:13 compute-0 podman[245548]: 2026-01-31 06:20:13.612326284 +0000 UTC m=+0.020823154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:20:13 compute-0 podman[245548]: 2026-01-31 06:20:13.716871522 +0000 UTC m=+0.125368392 container init b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:20:13 compute-0 podman[245548]: 2026-01-31 06:20:13.72239863 +0000 UTC m=+0.130895470 container start b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 06:20:13 compute-0 podman[245548]: 2026-01-31 06:20:13.726140746 +0000 UTC m=+0.134637586 container attach b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 06:20:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:14 compute-0 lvm[245644]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:20:14 compute-0 lvm[245645]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:20:14 compute-0 lvm[245645]: VG ceph_vg1 finished
Jan 31 06:20:14 compute-0 lvm[245644]: VG ceph_vg0 finished
Jan 31 06:20:14 compute-0 lvm[245647]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:20:14 compute-0 lvm[245647]: VG ceph_vg2 finished
Jan 31 06:20:14 compute-0 vigilant_chaplygin[245565]: {}
Jan 31 06:20:14 compute-0 systemd[1]: libpod-b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950.scope: Deactivated successfully.
Jan 31 06:20:14 compute-0 systemd[1]: libpod-b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950.scope: Consumed 1.226s CPU time.
Jan 31 06:20:14 compute-0 podman[245548]: 2026-01-31 06:20:14.573788204 +0000 UTC m=+0.982285044 container died b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddf9d705a812827f2e12cd6ce27e51935d8294b41d6fcedad61a4730803a0e14-merged.mount: Deactivated successfully.
Jan 31 06:20:14 compute-0 podman[245548]: 2026-01-31 06:20:14.613391412 +0000 UTC m=+1.021888252 container remove b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:20:14 compute-0 systemd[1]: libpod-conmon-b801bd7df504c4224a0586eb49c995c6e118d0fa622d94ac4c6260b3914ec950.scope: Deactivated successfully.
Jan 31 06:20:14 compute-0 sudo[245470]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:20:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:20:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:20:14 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:20:14 compute-0 sudo[245661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:20:14 compute-0 sudo[245661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:20:14 compute-0 sudo[245661]: pam_unix(sudo:session): session closed for user root
Jan 31 06:20:14 compute-0 ceph-mon[75251]: pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:20:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:20:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:20:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:20:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:20:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:20:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:20:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:20:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:16 compute-0 ceph-mon[75251]: pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:18 compute-0 ceph-mon[75251]: pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:20 compute-0 podman[245686]: 2026-01-31 06:20:20.141820014 +0000 UTC m=+0.066892085 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 06:20:20 compute-0 podman[245687]: 2026-01-31 06:20:20.144875342 +0000 UTC m=+0.069394977 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:20:20 compute-0 ceph-mon[75251]: pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:22 compute-0 ceph-mon[75251]: pgmap v922: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:24 compute-0 ceph-mon[75251]: pgmap v923: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:25 compute-0 ceph-mon[75251]: pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:28 compute-0 ceph-mon[75251]: pgmap v925: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:30 compute-0 ceph-mon[75251]: pgmap v926: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:32 compute-0 ceph-mon[75251]: pgmap v927: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:34 compute-0 ceph-mon[75251]: pgmap v928: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:36 compute-0 ceph-mon[75251]: pgmap v929: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:38 compute-0 ceph-mon[75251]: pgmap v930: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:40 compute-0 ceph-mon[75251]: pgmap v931: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:42 compute-0 ceph-mon[75251]: pgmap v932: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:20:44
Jan 31 06:20:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:20:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:20:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.control', 'vms']
Jan 31 06:20:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:20:44 compute-0 nova_compute[239679]: 2026-01-31 06:20:44.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:44 compute-0 nova_compute[239679]: 2026-01-31 06:20:44.508 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:20:44 compute-0 ceph-mon[75251]: pgmap v933: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:20:45 compute-0 nova_compute[239679]: 2026-01-31 06:20:45.505 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:45 compute-0 nova_compute[239679]: 2026-01-31 06:20:45.578 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:45 compute-0 nova_compute[239679]: 2026-01-31 06:20:45.578 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:46 compute-0 nova_compute[239679]: 2026-01-31 06:20:46.907 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:20:46 compute-0 nova_compute[239679]: 2026-01-31 06:20:46.907 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:20:46 compute-0 nova_compute[239679]: 2026-01-31 06:20:46.907 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:20:46 compute-0 nova_compute[239679]: 2026-01-31 06:20:46.907 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:20:46 compute-0 nova_compute[239679]: 2026-01-31 06:20:46.908 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:20:46 compute-0 ceph-mon[75251]: pgmap v934: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:20:47 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788442585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:20:47 compute-0 nova_compute[239679]: 2026-01-31 06:20:47.400 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:20:47 compute-0 nova_compute[239679]: 2026-01-31 06:20:47.513 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:20:47 compute-0 nova_compute[239679]: 2026-01-31 06:20:47.515 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:20:47 compute-0 nova_compute[239679]: 2026-01-31 06:20:47.515 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:20:47 compute-0 nova_compute[239679]: 2026-01-31 06:20:47.515 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:20:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:47 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1788442585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:20:48 compute-0 nova_compute[239679]: 2026-01-31 06:20:48.129 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:20:48 compute-0 nova_compute[239679]: 2026-01-31 06:20:48.130 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:20:48 compute-0 nova_compute[239679]: 2026-01-31 06:20:48.147 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:20:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:20:48 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610286776' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:20:48 compute-0 nova_compute[239679]: 2026-01-31 06:20:48.665 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:20:48 compute-0 nova_compute[239679]: 2026-01-31 06:20:48.670 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:20:48 compute-0 nova_compute[239679]: 2026-01-31 06:20:48.922 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:20:48 compute-0 nova_compute[239679]: 2026-01-31 06:20:48.923 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:20:48 compute-0 nova_compute[239679]: 2026-01-31 06:20:48.923 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:20:48 compute-0 ceph-mon[75251]: pgmap v935: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:48 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2610286776' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:20:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:20:50.214 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:20:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:20:50.215 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:20:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:20:50.215 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:20:50 compute-0 nova_compute[239679]: 2026-01-31 06:20:50.854 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:50 compute-0 nova_compute[239679]: 2026-01-31 06:20:50.854 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:50 compute-0 nova_compute[239679]: 2026-01-31 06:20:50.855 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:20:50 compute-0 nova_compute[239679]: 2026-01-31 06:20:50.855 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:20:50 compute-0 ceph-mon[75251]: pgmap v936: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:51 compute-0 podman[245775]: 2026-01-31 06:20:51.1117503 +0000 UTC m=+0.038345724 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible)
Jan 31 06:20:51 compute-0 podman[245774]: 2026-01-31 06:20:51.163655609 +0000 UTC m=+0.086633149 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:20:51 compute-0 nova_compute[239679]: 2026-01-31 06:20:51.192 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:20:51 compute-0 nova_compute[239679]: 2026-01-31 06:20:51.192 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:51 compute-0 nova_compute[239679]: 2026-01-31 06:20:51.192 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:51 compute-0 nova_compute[239679]: 2026-01-31 06:20:51.192 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:51 compute-0 nova_compute[239679]: 2026-01-31 06:20:51.193 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:20:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:52 compute-0 ceph-mon[75251]: pgmap v937: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:20:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1854037161' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:20:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:20:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1854037161' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:20:54 compute-0 ceph-mon[75251]: pgmap v938: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/1854037161' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:20:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/1854037161' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:20:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:20:56 compute-0 ceph-mon[75251]: pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:20:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:58 compute-0 ceph-mon[75251]: pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:20:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:00 compute-0 ceph-mon[75251]: pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:02 compute-0 ceph-mon[75251]: pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:04 compute-0 ceph-mon[75251]: pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:06 compute-0 ceph-mon[75251]: pgmap v944: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:08 compute-0 ceph-mon[75251]: pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:10 compute-0 ceph-mon[75251]: pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:12 compute-0 ceph-mon[75251]: pgmap v947: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:14 compute-0 sudo[245821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:21:14 compute-0 sudo[245821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:14 compute-0 sudo[245821]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:14 compute-0 sudo[245846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:21:14 compute-0 sudo[245846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:15 compute-0 ceph-mon[75251]: pgmap v948: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:15 compute-0 sudo[245846]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:21:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:21:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:21:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:21:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:21:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:21:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:21:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:21:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:21:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:21:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:21:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:21:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:21:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:21:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:21:15 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:21:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:21:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:21:15 compute-0 sudo[245901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:21:15 compute-0 sudo[245901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:15 compute-0 sudo[245901]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:15 compute-0 sudo[245926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:21:15 compute-0 sudo[245926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:15 compute-0 podman[245964]: 2026-01-31 06:21:15.763138264 +0000 UTC m=+0.021136123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:21:15 compute-0 podman[245964]: 2026-01-31 06:21:15.910523103 +0000 UTC m=+0.168520932 container create 98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_neumann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 06:21:15 compute-0 systemd[1]: Started libpod-conmon-98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4.scope.
Jan 31 06:21:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:21:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:21:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:21:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:21:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:21:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:21:16 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:21:16 compute-0 ceph-mon[75251]: pgmap v949: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:16 compute-0 podman[245964]: 2026-01-31 06:21:16.2186128 +0000 UTC m=+0.476610649 container init 98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_neumann, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:21:16 compute-0 podman[245964]: 2026-01-31 06:21:16.225798934 +0000 UTC m=+0.483796763 container start 98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 06:21:16 compute-0 busy_neumann[245980]: 167 167
Jan 31 06:21:16 compute-0 systemd[1]: libpod-98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4.scope: Deactivated successfully.
Jan 31 06:21:16 compute-0 podman[245964]: 2026-01-31 06:21:16.24915166 +0000 UTC m=+0.507149489 container attach 98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 06:21:16 compute-0 podman[245964]: 2026-01-31 06:21:16.250359584 +0000 UTC m=+0.508357413 container died 98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_neumann, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 06:21:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac1ef9488fdb2e400ba75e6e8428b6e88fdf3e5f029ab056b208c972117905b-merged.mount: Deactivated successfully.
Jan 31 06:21:16 compute-0 podman[245964]: 2026-01-31 06:21:16.505830092 +0000 UTC m=+0.763827931 container remove 98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 06:21:16 compute-0 systemd[1]: libpod-conmon-98541ede03a812d7e637c3a98f6ab4eb5c9ea5b5ebf25960a85b2aa23407f9e4.scope: Deactivated successfully.
Jan 31 06:21:16 compute-0 podman[246004]: 2026-01-31 06:21:16.593316294 +0000 UTC m=+0.018792816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:21:16 compute-0 podman[246004]: 2026-01-31 06:21:16.82271908 +0000 UTC m=+0.248195582 container create b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:21:16 compute-0 systemd[1]: Started libpod-conmon-b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e.scope.
Jan 31 06:21:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ca3b9267c5fcff2a8482e673ffb699253ee0a2db8a79ac5a190fc62cc33843/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ca3b9267c5fcff2a8482e673ffb699253ee0a2db8a79ac5a190fc62cc33843/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ca3b9267c5fcff2a8482e673ffb699253ee0a2db8a79ac5a190fc62cc33843/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ca3b9267c5fcff2a8482e673ffb699253ee0a2db8a79ac5a190fc62cc33843/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ca3b9267c5fcff2a8482e673ffb699253ee0a2db8a79ac5a190fc62cc33843/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:17 compute-0 podman[246004]: 2026-01-31 06:21:17.03971759 +0000 UTC m=+0.465194122 container init b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:21:17 compute-0 podman[246004]: 2026-01-31 06:21:17.045400192 +0000 UTC m=+0.470876694 container start b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 06:21:17 compute-0 podman[246004]: 2026-01-31 06:21:17.130558428 +0000 UTC m=+0.556034960 container attach b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:21:17 compute-0 distracted_jemison[246020]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:21:17 compute-0 distracted_jemison[246020]: --> All data devices are unavailable
Jan 31 06:21:17 compute-0 systemd[1]: libpod-b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e.scope: Deactivated successfully.
Jan 31 06:21:17 compute-0 podman[246004]: 2026-01-31 06:21:17.447777715 +0000 UTC m=+0.873254237 container died b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jemison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:21:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0ca3b9267c5fcff2a8482e673ffb699253ee0a2db8a79ac5a190fc62cc33843-merged.mount: Deactivated successfully.
Jan 31 06:21:17 compute-0 podman[246004]: 2026-01-31 06:21:17.816261383 +0000 UTC m=+1.241737885 container remove b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 06:21:17 compute-0 systemd[1]: libpod-conmon-b823337db8b54b0ccb08efa719c4497188315f8cd7cdd061eb8fc67436dfd07e.scope: Deactivated successfully.
Jan 31 06:21:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:17 compute-0 sudo[245926]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:17 compute-0 sudo[246054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:21:17 compute-0 sudo[246054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:17 compute-0 sudo[246054]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:17 compute-0 sudo[246079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:21:17 compute-0 sudo[246079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:18 compute-0 podman[246117]: 2026-01-31 06:21:18.185559763 +0000 UTC m=+0.032163447 container create c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 06:21:18 compute-0 systemd[1]: Started libpod-conmon-c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2.scope.
Jan 31 06:21:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:21:18 compute-0 podman[246117]: 2026-01-31 06:21:18.26724375 +0000 UTC m=+0.113847454 container init c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 06:21:18 compute-0 podman[246117]: 2026-01-31 06:21:18.171912475 +0000 UTC m=+0.018516179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:21:18 compute-0 podman[246117]: 2026-01-31 06:21:18.272270444 +0000 UTC m=+0.118874118 container start c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 06:21:18 compute-0 podman[246117]: 2026-01-31 06:21:18.275554397 +0000 UTC m=+0.122158101 container attach c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:21:18 compute-0 eager_franklin[246133]: 167 167
Jan 31 06:21:18 compute-0 podman[246117]: 2026-01-31 06:21:18.277641767 +0000 UTC m=+0.124245451 container died c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 06:21:18 compute-0 systemd[1]: libpod-c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2.scope: Deactivated successfully.
Jan 31 06:21:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-533805ccfe4717c9cb5249f7df7cb15b4ea6db2d6af2244d2f061106904beb1f-merged.mount: Deactivated successfully.
Jan 31 06:21:18 compute-0 podman[246117]: 2026-01-31 06:21:18.305369247 +0000 UTC m=+0.151972931 container remove c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:21:18 compute-0 systemd[1]: libpod-conmon-c1551908a2a894c4ceaa4656081ecd46f97649f3f2d9c01d6fd72ec7c05ea7e2.scope: Deactivated successfully.
Jan 31 06:21:18 compute-0 podman[246157]: 2026-01-31 06:21:18.4630872 +0000 UTC m=+0.070492770 container create b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 06:21:18 compute-0 podman[246157]: 2026-01-31 06:21:18.415606907 +0000 UTC m=+0.023012507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:21:18 compute-0 systemd[1]: Started libpod-conmon-b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836.scope.
Jan 31 06:21:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3728f45f26049da84f2308cb1327bd4d4ca1c3acb17ed10e625c121f65eb873/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3728f45f26049da84f2308cb1327bd4d4ca1c3acb17ed10e625c121f65eb873/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3728f45f26049da84f2308cb1327bd4d4ca1c3acb17ed10e625c121f65eb873/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3728f45f26049da84f2308cb1327bd4d4ca1c3acb17ed10e625c121f65eb873/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:18 compute-0 podman[246157]: 2026-01-31 06:21:18.595368678 +0000 UTC m=+0.202774258 container init b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:21:18 compute-0 podman[246157]: 2026-01-31 06:21:18.608368168 +0000 UTC m=+0.215773738 container start b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 06:21:18 compute-0 podman[246157]: 2026-01-31 06:21:18.630911291 +0000 UTC m=+0.238316861 container attach b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 31 06:21:18 compute-0 trusting_bose[246173]: {
Jan 31 06:21:18 compute-0 trusting_bose[246173]:     "0": [
Jan 31 06:21:18 compute-0 trusting_bose[246173]:         {
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "devices": [
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "/dev/loop3"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             ],
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_name": "ceph_lv0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_size": "21470642176",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "name": "ceph_lv0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "tags": {
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cluster_name": "ceph",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.crush_device_class": "",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.encrypted": "0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.objectstore": "bluestore",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osd_id": "0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.type": "block",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.vdo": "0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.with_tpm": "0"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             },
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "type": "block",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "vg_name": "ceph_vg0"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:         }
Jan 31 06:21:18 compute-0 trusting_bose[246173]:     ],
Jan 31 06:21:18 compute-0 trusting_bose[246173]:     "1": [
Jan 31 06:21:18 compute-0 trusting_bose[246173]:         {
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "devices": [
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "/dev/loop4"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             ],
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_name": "ceph_lv1",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_size": "21470642176",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "name": "ceph_lv1",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "tags": {
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cluster_name": "ceph",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.crush_device_class": "",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.encrypted": "0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.objectstore": "bluestore",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osd_id": "1",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.type": "block",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.vdo": "0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.with_tpm": "0"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             },
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "type": "block",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "vg_name": "ceph_vg1"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:         }
Jan 31 06:21:18 compute-0 trusting_bose[246173]:     ],
Jan 31 06:21:18 compute-0 trusting_bose[246173]:     "2": [
Jan 31 06:21:18 compute-0 trusting_bose[246173]:         {
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "devices": [
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "/dev/loop5"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             ],
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_name": "ceph_lv2",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_size": "21470642176",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "name": "ceph_lv2",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "tags": {
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.cluster_name": "ceph",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.crush_device_class": "",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.encrypted": "0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.objectstore": "bluestore",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osd_id": "2",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.type": "block",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.vdo": "0",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:                 "ceph.with_tpm": "0"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             },
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "type": "block",
Jan 31 06:21:18 compute-0 trusting_bose[246173]:             "vg_name": "ceph_vg2"
Jan 31 06:21:18 compute-0 trusting_bose[246173]:         }
Jan 31 06:21:18 compute-0 trusting_bose[246173]:     ]
Jan 31 06:21:18 compute-0 trusting_bose[246173]: }
Jan 31 06:21:18 compute-0 systemd[1]: libpod-b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836.scope: Deactivated successfully.
Jan 31 06:21:18 compute-0 podman[246157]: 2026-01-31 06:21:18.859278016 +0000 UTC m=+0.466683586 container died b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:21:19 compute-0 ceph-mon[75251]: pgmap v950: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3728f45f26049da84f2308cb1327bd4d4ca1c3acb17ed10e625c121f65eb873-merged.mount: Deactivated successfully.
Jan 31 06:21:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:20 compute-0 podman[246157]: 2026-01-31 06:21:20.000353593 +0000 UTC m=+1.607759173 container remove b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_bose, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:21:20 compute-0 systemd[1]: libpod-conmon-b8a1c5a114a0dac659d89e31a08b9f393695e00786d17c8dc582b760ea0f0836.scope: Deactivated successfully.
Jan 31 06:21:20 compute-0 sudo[246079]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:20 compute-0 ceph-mon[75251]: pgmap v951: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:20 compute-0 sudo[246196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:21:20 compute-0 sudo[246196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:20 compute-0 sudo[246196]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:20 compute-0 sudo[246221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:21:20 compute-0 sudo[246221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:20 compute-0 podman[246258]: 2026-01-31 06:21:20.471843425 +0000 UTC m=+0.060046271 container create 7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:21:20 compute-0 systemd[1]: Started libpod-conmon-7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf.scope.
Jan 31 06:21:20 compute-0 podman[246258]: 2026-01-31 06:21:20.435619623 +0000 UTC m=+0.023822479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:21:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:21:20 compute-0 podman[246258]: 2026-01-31 06:21:20.62576304 +0000 UTC m=+0.213965956 container init 7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:21:20 compute-0 podman[246258]: 2026-01-31 06:21:20.635130517 +0000 UTC m=+0.223333353 container start 7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 06:21:20 compute-0 heuristic_babbage[246274]: 167 167
Jan 31 06:21:20 compute-0 systemd[1]: libpod-7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf.scope: Deactivated successfully.
Jan 31 06:21:20 compute-0 podman[246258]: 2026-01-31 06:21:20.646033128 +0000 UTC m=+0.234236054 container attach 7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:21:20 compute-0 podman[246258]: 2026-01-31 06:21:20.646648155 +0000 UTC m=+0.234851031 container died 7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 06:21:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-62b4863391db6ec97a60d9f3d2ea2ef38456cb3823c95a7587d809b025682940-merged.mount: Deactivated successfully.
Jan 31 06:21:21 compute-0 podman[246258]: 2026-01-31 06:21:21.004058886 +0000 UTC m=+0.592261742 container remove 7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:21:21 compute-0 systemd[1]: libpod-conmon-7915de28d908cf557f1ddb5735798ec24cbb177938563944c905165996a0dfbf.scope: Deactivated successfully.
Jan 31 06:21:21 compute-0 podman[246300]: 2026-01-31 06:21:21.188398858 +0000 UTC m=+0.065687453 container create 8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lamport, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:21:21 compute-0 podman[246300]: 2026-01-31 06:21:21.145801374 +0000 UTC m=+0.023089999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:21:21 compute-0 systemd[1]: Started libpod-conmon-8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735.scope.
Jan 31 06:21:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b1797bec55d88adf0e07e7138e0bfc4f3f99846105c1d604275671bbd305eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b1797bec55d88adf0e07e7138e0bfc4f3f99846105c1d604275671bbd305eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b1797bec55d88adf0e07e7138e0bfc4f3f99846105c1d604275671bbd305eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b1797bec55d88adf0e07e7138e0bfc4f3f99846105c1d604275671bbd305eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:21:21 compute-0 podman[246300]: 2026-01-31 06:21:21.363935538 +0000 UTC m=+0.241224153 container init 8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lamport, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:21:21 compute-0 podman[246300]: 2026-01-31 06:21:21.369099845 +0000 UTC m=+0.246388440 container start 8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lamport, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:21:21 compute-0 podman[246300]: 2026-01-31 06:21:21.456408983 +0000 UTC m=+0.333697578 container attach 8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:21:21 compute-0 podman[246315]: 2026-01-31 06:21:21.501848437 +0000 UTC m=+0.271977249 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent)
Jan 31 06:21:21 compute-0 podman[246314]: 2026-01-31 06:21:21.50264188 +0000 UTC m=+0.271627689 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 06:21:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:22 compute-0 lvm[246442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:21:22 compute-0 lvm[246442]: VG ceph_vg1 finished
Jan 31 06:21:22 compute-0 lvm[246441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:21:22 compute-0 lvm[246441]: VG ceph_vg0 finished
Jan 31 06:21:22 compute-0 lvm[246444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:21:22 compute-0 lvm[246444]: VG ceph_vg2 finished
Jan 31 06:21:22 compute-0 ceph-mon[75251]: pgmap v952: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:22 compute-0 busy_lamport[246353]: {}
Jan 31 06:21:22 compute-0 systemd[1]: libpod-8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735.scope: Deactivated successfully.
Jan 31 06:21:22 compute-0 systemd[1]: libpod-8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735.scope: Consumed 1.159s CPU time.
Jan 31 06:21:22 compute-0 podman[246300]: 2026-01-31 06:21:22.197887826 +0000 UTC m=+1.075176441 container died 8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 06:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-58b1797bec55d88adf0e07e7138e0bfc4f3f99846105c1d604275671bbd305eb-merged.mount: Deactivated successfully.
Jan 31 06:21:22 compute-0 podman[246300]: 2026-01-31 06:21:22.347824787 +0000 UTC m=+1.225113392 container remove 8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lamport, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:21:22 compute-0 systemd[1]: libpod-conmon-8a04cc95b49162bb277e680083383f7734ce7ac14507b69562e12c6de14b7735.scope: Deactivated successfully.
Jan 31 06:21:22 compute-0 sudo[246221]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:21:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:21:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:21:22 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:21:22 compute-0 sudo[246460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:21:22 compute-0 sudo[246460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:21:22 compute-0 sudo[246460]: pam_unix(sudo:session): session closed for user root
Jan 31 06:21:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.596946) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840482597023, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1018, "num_deletes": 256, "total_data_size": 1476467, "memory_usage": 1501968, "flush_reason": "Manual Compaction"}
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840482610526, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1452251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18790, "largest_seqno": 19807, "table_properties": {"data_size": 1447276, "index_size": 2499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10272, "raw_average_key_size": 18, "raw_value_size": 1437274, "raw_average_value_size": 2608, "num_data_blocks": 114, "num_entries": 551, "num_filter_entries": 551, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769840387, "oldest_key_time": 1769840387, "file_creation_time": 1769840482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 13646 microseconds, and 5159 cpu microseconds.
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.610604) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1452251 bytes OK
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.610639) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.612835) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.612857) EVENT_LOG_v1 {"time_micros": 1769840482612851, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.612884) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1471629, prev total WAL file size 1471629, number of live WAL files 2.
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.613465) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1418KB)], [44(6216KB)]
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840482613801, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7818386, "oldest_snapshot_seqno": -1}
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4272 keys, 7693426 bytes, temperature: kUnknown
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840482683157, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7693426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7663707, "index_size": 17961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 105571, "raw_average_key_size": 24, "raw_value_size": 7585235, "raw_average_value_size": 1775, "num_data_blocks": 754, "num_entries": 4272, "num_filter_entries": 4272, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769840482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.683535) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7693426 bytes
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.685578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.7 rd, 110.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 6.1 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.7) write-amplify(5.3) OK, records in: 4796, records dropped: 524 output_compression: NoCompression
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.685596) EVENT_LOG_v1 {"time_micros": 1769840482685586, "job": 22, "event": "compaction_finished", "compaction_time_micros": 69387, "compaction_time_cpu_micros": 24579, "output_level": 6, "num_output_files": 1, "total_output_size": 7693426, "num_input_records": 4796, "num_output_records": 4272, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840482685876, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840482686835, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.613317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.686866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.686871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.686873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.686874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:21:22 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:21:22.686876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:21:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:21:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:21:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:24 compute-0 ceph-mon[75251]: pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:26 compute-0 ceph-mon[75251]: pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:29 compute-0 ceph-mon[75251]: pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:30 compute-0 ceph-mon[75251]: pgmap v956: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:31 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:21:31.189 155105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:bb:3d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'fa:c7:e7:36:17:de'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 06:21:31 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:21:31.192 155105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 06:21:31 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:21:31.193 155105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bf4b4a34-237c-4fe2-88ca-4e5346644b6b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 06:21:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:32 compute-0 ceph-mon[75251]: pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:35 compute-0 ceph-mon[75251]: pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:36 compute-0 ceph-mon[75251]: pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:38 compute-0 ceph-mon[75251]: pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:40 compute-0 ceph-mon[75251]: pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:42 compute-0 ceph-mon[75251]: pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:44 compute-0 ceph-mon[75251]: pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:21:44
Jan 31 06:21:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:21:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:21:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', '.mgr']
Jan 31 06:21:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:21:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:46 compute-0 nova_compute[239679]: 2026-01-31 06:21:46.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:46 compute-0 nova_compute[239679]: 2026-01-31 06:21:46.508 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:46 compute-0 nova_compute[239679]: 2026-01-31 06:21:46.508 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:21:46 compute-0 ceph-mon[75251]: pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:47 compute-0 nova_compute[239679]: 2026-01-31 06:21:47.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:47 compute-0 nova_compute[239679]: 2026-01-31 06:21:47.716 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:21:47 compute-0 nova_compute[239679]: 2026-01-31 06:21:47.717 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:21:47 compute-0 nova_compute[239679]: 2026-01-31 06:21:47.717 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:21:47 compute-0 nova_compute[239679]: 2026-01-31 06:21:47.718 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:21:47 compute-0 nova_compute[239679]: 2026-01-31 06:21:47.718 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:21:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:21:48 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2987662965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:21:48 compute-0 nova_compute[239679]: 2026-01-31 06:21:48.212 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:21:48 compute-0 nova_compute[239679]: 2026-01-31 06:21:48.375 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:21:48 compute-0 nova_compute[239679]: 2026-01-31 06:21:48.376 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:21:48 compute-0 nova_compute[239679]: 2026-01-31 06:21:48.377 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:21:48 compute-0 nova_compute[239679]: 2026-01-31 06:21:48.377 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:21:49 compute-0 ceph-mon[75251]: pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:49 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2987662965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:21:49 compute-0 nova_compute[239679]: 2026-01-31 06:21:49.685 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:21:49 compute-0 nova_compute[239679]: 2026-01-31 06:21:49.685 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:21:49 compute-0 nova_compute[239679]: 2026-01-31 06:21:49.705 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:21:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:50 compute-0 ceph-mon[75251]: pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:21:50 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831845241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:21:50 compute-0 nova_compute[239679]: 2026-01-31 06:21:50.206 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:21:50 compute-0 nova_compute[239679]: 2026-01-31 06:21:50.212 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:21:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:21:50.216 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:21:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:21:50.217 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:21:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:21:50.217 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:21:50 compute-0 nova_compute[239679]: 2026-01-31 06:21:50.318 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:21:50 compute-0 nova_compute[239679]: 2026-01-31 06:21:50.319 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:21:50 compute-0 nova_compute[239679]: 2026-01-31 06:21:50.320 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:21:51 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3831845241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:21:51 compute-0 nova_compute[239679]: 2026-01-31 06:21:51.321 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:51 compute-0 nova_compute[239679]: 2026-01-31 06:21:51.322 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:51 compute-0 nova_compute[239679]: 2026-01-31 06:21:51.323 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:21:51 compute-0 nova_compute[239679]: 2026-01-31 06:21:51.324 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:21:51 compute-0 nova_compute[239679]: 2026-01-31 06:21:51.487 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:21:51 compute-0 nova_compute[239679]: 2026-01-31 06:21:51.488 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:51 compute-0 nova_compute[239679]: 2026-01-31 06:21:51.488 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:51 compute-0 nova_compute[239679]: 2026-01-31 06:21:51.488 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:52 compute-0 podman[246531]: 2026-01-31 06:21:52.122960966 +0000 UTC m=+0.048159184 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 06:21:52 compute-0 podman[246530]: 2026-01-31 06:21:52.172005203 +0000 UTC m=+0.096555942 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:21:52 compute-0 ceph-mon[75251]: pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:52 compute-0 nova_compute[239679]: 2026-01-31 06:21:52.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:21:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:54 compute-0 ceph-mon[75251]: pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:21:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2598188024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:21:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:21:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2598188024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:21:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2598188024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:21:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2598188024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:21:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:21:56 compute-0 ceph-mon[75251]: pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:21:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:58 compute-0 ceph-mon[75251]: pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:21:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:00 compute-0 ceph-mon[75251]: pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:02 compute-0 ceph-mon[75251]: pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:04 compute-0 ceph-mon[75251]: pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:06 compute-0 ceph-mon[75251]: pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:08 compute-0 ceph-mon[75251]: pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:10 compute-0 ceph-mon[75251]: pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:13 compute-0 ceph-mon[75251]: pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:14 compute-0 ceph-mon[75251]: pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:22:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:22:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:22:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:22:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:22:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:22:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:16 compute-0 ceph-mon[75251]: pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:18 compute-0 ceph-mon[75251]: pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:20 compute-0 ceph-mon[75251]: pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:22 compute-0 ceph-mon[75251]: pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:22 compute-0 sudo[246575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:22:22 compute-0 sudo[246575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:22 compute-0 sudo[246575]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:22 compute-0 sudo[246612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:22:22 compute-0 sudo[246612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:22 compute-0 podman[246600]: 2026-01-31 06:22:22.68846906 +0000 UTC m=+0.067790732 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 06:22:22 compute-0 podman[246599]: 2026-01-31 06:22:22.70498305 +0000 UTC m=+0.090387306 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 06:22:23 compute-0 sudo[246612]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:22:23 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:22:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:22:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:22:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:22:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:22:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:22:23 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:22:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:22:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:22:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:22:23 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:22:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:22:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:22:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:22:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:22:23 compute-0 sudo[246702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:22:23 compute-0 sudo[246702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:23 compute-0 sudo[246702]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:23 compute-0 sudo[246727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:22:23 compute-0 sudo[246727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:23 compute-0 podman[246764]: 2026-01-31 06:22:23.578476584 +0000 UTC m=+0.018199699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:22:23 compute-0 podman[246764]: 2026-01-31 06:22:23.725464021 +0000 UTC m=+0.165187176 container create f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 06:22:23 compute-0 systemd[1]: Started libpod-conmon-f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d.scope.
Jan 31 06:22:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:22:23 compute-0 podman[246764]: 2026-01-31 06:22:23.932391006 +0000 UTC m=+0.372114161 container init f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:22:23 compute-0 podman[246764]: 2026-01-31 06:22:23.937916334 +0000 UTC m=+0.377639449 container start f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:22:23 compute-0 systemd[1]: libpod-f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d.scope: Deactivated successfully.
Jan 31 06:22:23 compute-0 vigilant_mendeleev[246780]: 167 167
Jan 31 06:22:23 compute-0 conmon[246780]: conmon f1d128b91a4aa1e1ac68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d.scope/container/memory.events
Jan 31 06:22:23 compute-0 podman[246764]: 2026-01-31 06:22:23.967219178 +0000 UTC m=+0.406942293 container attach f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:22:23 compute-0 podman[246764]: 2026-01-31 06:22:23.968833594 +0000 UTC m=+0.408556709 container died f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendeleev, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bba7da103de8e22efb53a9776b51bb05d8aaa7b6ad9fadf7a70f36309fd64661-merged.mount: Deactivated successfully.
Jan 31 06:22:24 compute-0 podman[246764]: 2026-01-31 06:22:24.267176514 +0000 UTC m=+0.706899619 container remove f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendeleev, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 06:22:24 compute-0 systemd[1]: libpod-conmon-f1d128b91a4aa1e1ac684835e07061efc90ad86e8a67e7b3106cd9f6c79d647d.scope: Deactivated successfully.
Jan 31 06:22:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:22:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:22:24 compute-0 ceph-mon[75251]: pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:24 compute-0 podman[246809]: 2026-01-31 06:22:24.401236483 +0000 UTC m=+0.056161541 container create 0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_roentgen, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 06:22:24 compute-0 podman[246809]: 2026-01-31 06:22:24.365215087 +0000 UTC m=+0.020140155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:22:24 compute-0 systemd[1]: Started libpod-conmon-0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a.scope.
Jan 31 06:22:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae50279ed4e5e56dbc192de0296242a0cac21c3d94323a644165f1ae8505ea13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae50279ed4e5e56dbc192de0296242a0cac21c3d94323a644165f1ae8505ea13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae50279ed4e5e56dbc192de0296242a0cac21c3d94323a644165f1ae8505ea13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae50279ed4e5e56dbc192de0296242a0cac21c3d94323a644165f1ae8505ea13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae50279ed4e5e56dbc192de0296242a0cac21c3d94323a644165f1ae8505ea13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:24 compute-0 podman[246809]: 2026-01-31 06:22:24.592756809 +0000 UTC m=+0.247681907 container init 0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:22:24 compute-0 podman[246809]: 2026-01-31 06:22:24.598571155 +0000 UTC m=+0.253496213 container start 0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:22:24 compute-0 podman[246809]: 2026-01-31 06:22:24.631085831 +0000 UTC m=+0.286010929 container attach 0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_roentgen, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:22:24 compute-0 cranky_roentgen[246825]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:22:24 compute-0 cranky_roentgen[246825]: --> All data devices are unavailable
Jan 31 06:22:25 compute-0 systemd[1]: libpod-0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a.scope: Deactivated successfully.
Jan 31 06:22:25 compute-0 podman[246809]: 2026-01-31 06:22:25.029158131 +0000 UTC m=+0.684083189 container died 0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_roentgen, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae50279ed4e5e56dbc192de0296242a0cac21c3d94323a644165f1ae8505ea13-merged.mount: Deactivated successfully.
Jan 31 06:22:25 compute-0 podman[246809]: 2026-01-31 06:22:25.322752835 +0000 UTC m=+0.977677883 container remove 0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 06:22:25 compute-0 sudo[246727]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:25 compute-0 systemd[1]: libpod-conmon-0e52c4e20e31447e53198290d9647d7b18cf88201ff02aabb4e63aecba56420a.scope: Deactivated successfully.
Jan 31 06:22:25 compute-0 sudo[246857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:22:25 compute-0 sudo[246857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:25 compute-0 sudo[246857]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:25 compute-0 sudo[246882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:22:25 compute-0 sudo[246882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:25 compute-0 podman[246919]: 2026-01-31 06:22:25.822521532 +0000 UTC m=+0.106577817 container create 564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mestorf, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 06:22:25 compute-0 podman[246919]: 2026-01-31 06:22:25.737192961 +0000 UTC m=+0.021249296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:22:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:25 compute-0 systemd[1]: Started libpod-conmon-564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8.scope.
Jan 31 06:22:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:22:25 compute-0 podman[246919]: 2026-01-31 06:22:25.951858916 +0000 UTC m=+0.235915171 container init 564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:22:25 compute-0 podman[246919]: 2026-01-31 06:22:25.957639661 +0000 UTC m=+0.241695906 container start 564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mestorf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:22:25 compute-0 cool_mestorf[246935]: 167 167
Jan 31 06:22:25 compute-0 systemd[1]: libpod-564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8.scope: Deactivated successfully.
Jan 31 06:22:25 compute-0 podman[246919]: 2026-01-31 06:22:25.996024495 +0000 UTC m=+0.280080740 container attach 564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mestorf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 06:22:25 compute-0 podman[246919]: 2026-01-31 06:22:25.996500178 +0000 UTC m=+0.280556423 container died 564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f41e3d8831f06711615490eee51baac3747458eea53fc570a9df7c65f3e01479-merged.mount: Deactivated successfully.
Jan 31 06:22:26 compute-0 podman[246919]: 2026-01-31 06:22:26.535273986 +0000 UTC m=+0.819330271 container remove 564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 06:22:26 compute-0 systemd[1]: libpod-conmon-564247acbb3ac1aa743e9fb5ff8b723183246f1a6bc6b84c22f9c41e1babd9a8.scope: Deactivated successfully.
Jan 31 06:22:26 compute-0 podman[246960]: 2026-01-31 06:22:26.715735168 +0000 UTC m=+0.047197716 container create bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 06:22:26 compute-0 systemd[1]: Started libpod-conmon-bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56.scope.
Jan 31 06:22:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:22:26 compute-0 podman[246960]: 2026-01-31 06:22:26.6968645 +0000 UTC m=+0.028327078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b72df609dbe401579ce25d74e020c4254fefe12dcd967f94885b26c9c06c79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b72df609dbe401579ce25d74e020c4254fefe12dcd967f94885b26c9c06c79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b72df609dbe401579ce25d74e020c4254fefe12dcd967f94885b26c9c06c79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b72df609dbe401579ce25d74e020c4254fefe12dcd967f94885b26c9c06c79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:26 compute-0 podman[246960]: 2026-01-31 06:22:26.814715428 +0000 UTC m=+0.146177996 container init bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 06:22:26 compute-0 podman[246960]: 2026-01-31 06:22:26.821188632 +0000 UTC m=+0.152651180 container start bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 06:22:26 compute-0 podman[246960]: 2026-01-31 06:22:26.824650591 +0000 UTC m=+0.156113139 container attach bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:22:26 compute-0 ceph-mon[75251]: pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]: {
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:     "0": [
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:         {
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "devices": [
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "/dev/loop3"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             ],
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_name": "ceph_lv0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_size": "21470642176",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "name": "ceph_lv0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "tags": {
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cluster_name": "ceph",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.crush_device_class": "",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.encrypted": "0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.objectstore": "bluestore",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osd_id": "0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.type": "block",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.vdo": "0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.with_tpm": "0"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             },
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "type": "block",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "vg_name": "ceph_vg0"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:         }
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:     ],
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:     "1": [
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:         {
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "devices": [
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "/dev/loop4"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             ],
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_name": "ceph_lv1",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_size": "21470642176",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "name": "ceph_lv1",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "tags": {
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cluster_name": "ceph",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.crush_device_class": "",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.encrypted": "0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.objectstore": "bluestore",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osd_id": "1",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.type": "block",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.vdo": "0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.with_tpm": "0"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             },
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "type": "block",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "vg_name": "ceph_vg1"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:         }
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:     ],
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:     "2": [
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:         {
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "devices": [
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "/dev/loop5"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             ],
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_name": "ceph_lv2",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_size": "21470642176",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "name": "ceph_lv2",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "tags": {
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.cluster_name": "ceph",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.crush_device_class": "",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.encrypted": "0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.objectstore": "bluestore",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osd_id": "2",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.type": "block",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.vdo": "0",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:                 "ceph.with_tpm": "0"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             },
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "type": "block",
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:             "vg_name": "ceph_vg2"
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:         }
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]:     ]
Jan 31 06:22:27 compute-0 dreamy_hellman[246977]: }
Jan 31 06:22:27 compute-0 systemd[1]: libpod-bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56.scope: Deactivated successfully.
Jan 31 06:22:27 compute-0 podman[246960]: 2026-01-31 06:22:27.09421195 +0000 UTC m=+0.425674498 container died bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-19b72df609dbe401579ce25d74e020c4254fefe12dcd967f94885b26c9c06c79-merged.mount: Deactivated successfully.
Jan 31 06:22:27 compute-0 podman[246960]: 2026-01-31 06:22:27.13037689 +0000 UTC m=+0.461839438 container remove bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:22:27 compute-0 systemd[1]: libpod-conmon-bcd0d804bb897182e228287351d8f3ddd02fbef0f6986c0bc969fb4cb80adf56.scope: Deactivated successfully.
Jan 31 06:22:27 compute-0 sudo[246882]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:27 compute-0 sudo[246998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:22:27 compute-0 sudo[246998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:27 compute-0 sudo[246998]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:27 compute-0 sudo[247023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:22:27 compute-0 sudo[247023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:27 compute-0 podman[247060]: 2026-01-31 06:22:27.581014998 +0000 UTC m=+0.087090402 container create 0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 31 06:22:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:27 compute-0 podman[247060]: 2026-01-31 06:22:27.517670053 +0000 UTC m=+0.023745497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:22:27 compute-0 systemd[1]: Started libpod-conmon-0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86.scope.
Jan 31 06:22:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:22:27 compute-0 podman[247060]: 2026-01-31 06:22:27.671950649 +0000 UTC m=+0.178026093 container init 0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_herschel, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:22:27 compute-0 podman[247060]: 2026-01-31 06:22:27.678194186 +0000 UTC m=+0.184269590 container start 0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_herschel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:22:27 compute-0 podman[247060]: 2026-01-31 06:22:27.681962914 +0000 UTC m=+0.188038378 container attach 0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 06:22:27 compute-0 goofy_herschel[247076]: 167 167
Jan 31 06:22:27 compute-0 systemd[1]: libpod-0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86.scope: Deactivated successfully.
Jan 31 06:22:27 compute-0 podman[247060]: 2026-01-31 06:22:27.68324841 +0000 UTC m=+0.189323844 container died 0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 06:22:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8139bed7030bd5e165f51a21195dad17f4ea8ec60f7d805fcb5482d5fbf21634-merged.mount: Deactivated successfully.
Jan 31 06:22:27 compute-0 podman[247060]: 2026-01-31 06:22:27.722418926 +0000 UTC m=+0.228494340 container remove 0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_herschel, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:22:27 compute-0 systemd[1]: libpod-conmon-0d269fe632654be874057e01dae59a53d057d7f5790d507d7f80914b16817a86.scope: Deactivated successfully.
Jan 31 06:22:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:27 compute-0 podman[247099]: 2026-01-31 06:22:27.901725064 +0000 UTC m=+0.093742491 container create 5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_joliot, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:22:27 compute-0 podman[247099]: 2026-01-31 06:22:27.827078568 +0000 UTC m=+0.019096015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:22:27 compute-0 systemd[1]: Started libpod-conmon-5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f.scope.
Jan 31 06:22:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1311a93bc1e0db9cf6f1adb47a01260f26abfbe1c4eba0639c8a9748257c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1311a93bc1e0db9cf6f1adb47a01260f26abfbe1c4eba0639c8a9748257c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1311a93bc1e0db9cf6f1adb47a01260f26abfbe1c4eba0639c8a9748257c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1311a93bc1e0db9cf6f1adb47a01260f26abfbe1c4eba0639c8a9748257c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:22:28 compute-0 podman[247099]: 2026-01-31 06:22:28.067461806 +0000 UTC m=+0.259479263 container init 5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_joliot, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:22:28 compute-0 podman[247099]: 2026-01-31 06:22:28.073473937 +0000 UTC m=+0.265491364 container start 5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:22:28 compute-0 podman[247099]: 2026-01-31 06:22:28.099289183 +0000 UTC m=+0.291306610 container attach 5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:22:28 compute-0 lvm[247193]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:22:28 compute-0 lvm[247194]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:22:28 compute-0 lvm[247194]: VG ceph_vg1 finished
Jan 31 06:22:28 compute-0 lvm[247193]: VG ceph_vg0 finished
Jan 31 06:22:28 compute-0 lvm[247196]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:22:28 compute-0 lvm[247196]: VG ceph_vg2 finished
Jan 31 06:22:28 compute-0 jovial_joliot[247115]: {}
Jan 31 06:22:28 compute-0 systemd[1]: libpod-5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f.scope: Deactivated successfully.
Jan 31 06:22:28 compute-0 podman[247099]: 2026-01-31 06:22:28.869720491 +0000 UTC m=+1.061737918 container died 5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_joliot, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 06:22:28 compute-0 systemd[1]: libpod-5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f.scope: Consumed 1.116s CPU time.
Jan 31 06:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f1311a93bc1e0db9cf6f1adb47a01260f26abfbe1c4eba0639c8a9748257c1b-merged.mount: Deactivated successfully.
Jan 31 06:22:28 compute-0 podman[247099]: 2026-01-31 06:22:28.930177413 +0000 UTC m=+1.122194840 container remove 5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:22:28 compute-0 systemd[1]: libpod-conmon-5e69ffd8aafb55a6beb1e2ba48e27ae3af9c7dd696d115b5dde44834e614d02f.scope: Deactivated successfully.
Jan 31 06:22:28 compute-0 sudo[247023]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:22:28 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:22:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:22:28 compute-0 ceph-mon[75251]: pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:28 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:22:29 compute-0 sudo[247211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:22:29 compute-0 sudo[247211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:22:29 compute-0 sudo[247211]: pam_unix(sudo:session): session closed for user root
Jan 31 06:22:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:22:29 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:22:31 compute-0 ceph-mon[75251]: pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:32 compute-0 ceph-mon[75251]: pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:34 compute-0 ceph-mon[75251]: pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:36 compute-0 ceph-mon[75251]: pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:39 compute-0 ceph-mon[75251]: pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:40 compute-0 ceph-mon[75251]: pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:40 compute-0 nova_compute[239679]: 2026-01-31 06:22:40.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:22:40 compute-0 nova_compute[239679]: 2026-01-31 06:22:40.508 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 06:22:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:42 compute-0 ceph-mon[75251]: pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:44 compute-0 ceph-mon[75251]: pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:22:44
Jan 31 06:22:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:22:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:22:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['images', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta']
Jan 31 06:22:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:22:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:46 compute-0 ceph-mon[75251]: pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:48 compute-0 ceph-mon[75251]: pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:22:50.217 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:22:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:22:50.218 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:22:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:22:50.218 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:22:51 compute-0 ceph-mon[75251]: pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:52 compute-0 ceph-mon[75251]: pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:53 compute-0 podman[247237]: 2026-01-31 06:22:53.136760336 +0000 UTC m=+0.056490451 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 06:22:53 compute-0 podman[247236]: 2026-01-31 06:22:53.155735346 +0000 UTC m=+0.076420448 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:22:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:22:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/336213138' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:22:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:22:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/336213138' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:22:55 compute-0 ceph-mon[75251]: pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/336213138' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:22:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/336213138' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:22:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:22:56 compute-0 ceph-mon[75251]: pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:22:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:57 compute-0 sshd-session[247279]: Connection closed by 45.148.10.240 port 38590
Jan 31 06:22:59 compute-0 ceph-mon[75251]: pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:22:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:00 compute-0 ceph-mon[75251]: pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:02 compute-0 ceph-mon[75251]: pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:05 compute-0 ceph-mon[75251]: pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:07 compute-0 ceph-mon[75251]: pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:08 compute-0 ceph-mon[75251]: pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:08 compute-0 nova_compute[239679]: 2026-01-31 06:23:08.905 239684 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 20.76 sec
Jan 31 06:23:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:10 compute-0 ceph-mon[75251]: pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:13 compute-0 ceph-mon[75251]: pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:14 compute-0 ceph-mon[75251]: pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:23:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:23:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:23:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:23:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:23:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:23:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:16 compute-0 ceph-mon[75251]: pgmap v1009: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:19 compute-0 ceph-mon[75251]: pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:20 compute-0 ceph-mon[75251]: pgmap v1011: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:22 compute-0 ceph-mon[75251]: pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:24 compute-0 podman[247281]: 2026-01-31 06:23:24.123469584 +0000 UTC m=+0.047730449 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 06:23:24 compute-0 podman[247280]: 2026-01-31 06:23:24.171196493 +0000 UTC m=+0.093032419 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 06:23:25 compute-0 ceph-mon[75251]: pgmap v1013: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:25 compute-0 nova_compute[239679]: 2026-01-31 06:23:25.834 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 06:23:25 compute-0 nova_compute[239679]: 2026-01-31 06:23:25.835 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:25 compute-0 nova_compute[239679]: 2026-01-31 06:23:25.835 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 06:23:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:26 compute-0 ceph-mon[75251]: pgmap v1014: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:23:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4612 writes, 20K keys, 4612 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4612 writes, 4612 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1304 writes, 5930 keys, 1304 commit groups, 1.0 writes per commit group, ingest: 8.62 MB, 0.01 MB/s
                                           Interval WAL: 1304 writes, 1304 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.7      0.77              0.06        11    0.070       0      0       0.0       0.0
                                             L6      1/0    7.34 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     61.9     51.4      1.40              0.21        10    0.140     43K   5178       0.0       0.0
                                            Sum      1/0    7.34 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     40.0     43.4      2.17              0.27        21    0.103     43K   5178       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.0     54.8     54.8      0.79              0.13        10    0.079     23K   2982       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     61.9     51.4      1.40              0.21        10    0.140     43K   5178       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.8      0.76              0.06        10    0.076       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.7      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.022, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 2.2 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2e66f78d0#2 capacity: 308.00 MB usage: 7.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(427,6.73 MB,2.18531%) FilterBlock(22,130.36 KB,0.0413325%) IndexBlock(22,240.81 KB,0.0763534%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 06:23:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:29 compute-0 sudo[247325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:23:29 compute-0 sudo[247325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:29 compute-0 sudo[247325]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:29 compute-0 sudo[247350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:23:29 compute-0 sudo[247350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:29 compute-0 ceph-mon[75251]: pgmap v1015: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:29 compute-0 sudo[247350]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:23:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:23:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:23:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:23:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:23:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:23:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:23:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:23:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:23:29 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:23:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:23:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:23:29 compute-0 sudo[247407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:23:29 compute-0 sudo[247407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:29 compute-0 sudo[247407]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:29 compute-0 sudo[247432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:23:29 compute-0 sudo[247432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:29 compute-0 podman[247469]: 2026-01-31 06:23:29.936503051 +0000 UTC m=+0.082478941 container create 07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 06:23:29 compute-0 podman[247469]: 2026-01-31 06:23:29.872178664 +0000 UTC m=+0.018154574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:23:30 compute-0 systemd[1]: Started libpod-conmon-07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081.scope.
Jan 31 06:23:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:23:30 compute-0 podman[247469]: 2026-01-31 06:23:30.110408293 +0000 UTC m=+0.256384203 container init 07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 06:23:30 compute-0 podman[247469]: 2026-01-31 06:23:30.117793042 +0000 UTC m=+0.263768932 container start 07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_benz, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:23:30 compute-0 dazzling_benz[247485]: 167 167
Jan 31 06:23:30 compute-0 systemd[1]: libpod-07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081.scope: Deactivated successfully.
Jan 31 06:23:30 compute-0 podman[247469]: 2026-01-31 06:23:30.133241688 +0000 UTC m=+0.279217628 container attach 07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_benz, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 06:23:30 compute-0 podman[247469]: 2026-01-31 06:23:30.133749553 +0000 UTC m=+0.279725453 container died 07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_benz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 06:23:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:23:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:23:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:23:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:23:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:23:30 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:23:30 compute-0 ceph-mon[75251]: pgmap v1016: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f695952d77fbe9b5962050ad25c44693b112dfa03099214f296007aa2f1fdf64-merged.mount: Deactivated successfully.
Jan 31 06:23:30 compute-0 podman[247469]: 2026-01-31 06:23:30.326181438 +0000 UTC m=+0.472157328 container remove 07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 06:23:30 compute-0 systemd[1]: libpod-conmon-07b583fe5396a22954e8ee73b735655381ee0a0477ff9f914d08dfa9c6051081.scope: Deactivated successfully.
Jan 31 06:23:30 compute-0 podman[247511]: 2026-01-31 06:23:30.460507033 +0000 UTC m=+0.057723662 container create aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:23:30 compute-0 podman[247511]: 2026-01-31 06:23:30.420907224 +0000 UTC m=+0.018123853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:23:30 compute-0 systemd[1]: Started libpod-conmon-aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654.scope.
Jan 31 06:23:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422ee296a0c4e085b31d31eeb52e7b6a916f5ebc6e5db9e4a0ce6b2535191522/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422ee296a0c4e085b31d31eeb52e7b6a916f5ebc6e5db9e4a0ce6b2535191522/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422ee296a0c4e085b31d31eeb52e7b6a916f5ebc6e5db9e4a0ce6b2535191522/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422ee296a0c4e085b31d31eeb52e7b6a916f5ebc6e5db9e4a0ce6b2535191522/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422ee296a0c4e085b31d31eeb52e7b6a916f5ebc6e5db9e4a0ce6b2535191522/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:30 compute-0 podman[247511]: 2026-01-31 06:23:30.595951799 +0000 UTC m=+0.193168438 container init aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:23:30 compute-0 podman[247511]: 2026-01-31 06:23:30.600423335 +0000 UTC m=+0.197639964 container start aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 06:23:30 compute-0 podman[247511]: 2026-01-31 06:23:30.70468455 +0000 UTC m=+0.301901179 container attach aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mahavira, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:23:30 compute-0 stoic_mahavira[247528]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:23:30 compute-0 stoic_mahavira[247528]: --> All data devices are unavailable
Jan 31 06:23:30 compute-0 systemd[1]: libpod-aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654.scope: Deactivated successfully.
Jan 31 06:23:30 compute-0 podman[247511]: 2026-01-31 06:23:30.963555613 +0000 UTC m=+0.560772242 container died aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-422ee296a0c4e085b31d31eeb52e7b6a916f5ebc6e5db9e4a0ce6b2535191522-merged.mount: Deactivated successfully.
Jan 31 06:23:31 compute-0 podman[247511]: 2026-01-31 06:23:31.393855118 +0000 UTC m=+0.991071757 container remove aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mahavira, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 06:23:31 compute-0 systemd[1]: libpod-conmon-aaf5c55e7ed897f858d53415cca71e7da8298d9fdddcafc70adbc25c70377654.scope: Deactivated successfully.
Jan 31 06:23:31 compute-0 sudo[247432]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:31 compute-0 sudo[247559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:23:31 compute-0 sudo[247559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:31 compute-0 sudo[247559]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:31 compute-0 sudo[247584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:23:31 compute-0 sudo[247584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:31 compute-0 podman[247619]: 2026-01-31 06:23:31.803246363 +0000 UTC m=+0.040315210 container create ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 06:23:31 compute-0 nova_compute[239679]: 2026-01-31 06:23:31.820 239684 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 12.91 sec
Jan 31 06:23:31 compute-0 systemd[1]: Started libpod-conmon-ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0.scope.
Jan 31 06:23:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:23:31 compute-0 podman[247619]: 2026-01-31 06:23:31.781493708 +0000 UTC m=+0.018562575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:23:31 compute-0 podman[247619]: 2026-01-31 06:23:31.886544736 +0000 UTC m=+0.123613583 container init ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:23:31 compute-0 podman[247619]: 2026-01-31 06:23:31.891036813 +0000 UTC m=+0.128105650 container start ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:23:31 compute-0 stoic_brahmagupta[247636]: 167 167
Jan 31 06:23:31 compute-0 systemd[1]: libpod-ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0.scope: Deactivated successfully.
Jan 31 06:23:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:31 compute-0 podman[247619]: 2026-01-31 06:23:31.901415586 +0000 UTC m=+0.138484433 container attach ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 31 06:23:31 compute-0 podman[247619]: 2026-01-31 06:23:31.90191985 +0000 UTC m=+0.138988697 container died ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brahmagupta, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 06:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c131c8b44fac5a9d2ad8655bad8df1ed0a634a9e4d47e4d9a1af59dde11d34e-merged.mount: Deactivated successfully.
Jan 31 06:23:31 compute-0 podman[247619]: 2026-01-31 06:23:31.986958873 +0000 UTC m=+0.224027730 container remove ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:23:31 compute-0 systemd[1]: libpod-conmon-ad7f1ecc705152681153fe67713c7255e8d0aef29c20529bd4f30979c4ea6ed0.scope: Deactivated successfully.
Jan 31 06:23:32 compute-0 podman[247662]: 2026-01-31 06:23:32.103208156 +0000 UTC m=+0.041287207 container create 906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:23:32 compute-0 systemd[1]: Started libpod-conmon-906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5.scope.
Jan 31 06:23:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205b8f4b024d48799a5b726cdbc1d7ef0d87d89d5e1bd6738760e217a961dfcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205b8f4b024d48799a5b726cdbc1d7ef0d87d89d5e1bd6738760e217a961dfcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205b8f4b024d48799a5b726cdbc1d7ef0d87d89d5e1bd6738760e217a961dfcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/205b8f4b024d48799a5b726cdbc1d7ef0d87d89d5e1bd6738760e217a961dfcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:32 compute-0 podman[247662]: 2026-01-31 06:23:32.081961346 +0000 UTC m=+0.020040427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:23:32 compute-0 nova_compute[239679]: 2026-01-31 06:23:32.177 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:32 compute-0 podman[247662]: 2026-01-31 06:23:32.196217014 +0000 UTC m=+0.134296065 container init 906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:23:32 compute-0 podman[247662]: 2026-01-31 06:23:32.200583597 +0000 UTC m=+0.138662648 container start 906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:23:32 compute-0 podman[247662]: 2026-01-31 06:23:32.206814843 +0000 UTC m=+0.144893894 container attach 906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:23:32 compute-0 priceless_black[247680]: {
Jan 31 06:23:32 compute-0 priceless_black[247680]:     "0": [
Jan 31 06:23:32 compute-0 priceless_black[247680]:         {
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "devices": [
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "/dev/loop3"
Jan 31 06:23:32 compute-0 priceless_black[247680]:             ],
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_name": "ceph_lv0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_size": "21470642176",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "name": "ceph_lv0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "tags": {
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cluster_name": "ceph",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.crush_device_class": "",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.encrypted": "0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.objectstore": "bluestore",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osd_id": "0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.type": "block",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.vdo": "0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.with_tpm": "0"
Jan 31 06:23:32 compute-0 priceless_black[247680]:             },
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "type": "block",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "vg_name": "ceph_vg0"
Jan 31 06:23:32 compute-0 priceless_black[247680]:         }
Jan 31 06:23:32 compute-0 priceless_black[247680]:     ],
Jan 31 06:23:32 compute-0 priceless_black[247680]:     "1": [
Jan 31 06:23:32 compute-0 priceless_black[247680]:         {
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "devices": [
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "/dev/loop4"
Jan 31 06:23:32 compute-0 priceless_black[247680]:             ],
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_name": "ceph_lv1",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_size": "21470642176",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "name": "ceph_lv1",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "tags": {
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cluster_name": "ceph",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.crush_device_class": "",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.encrypted": "0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.objectstore": "bluestore",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osd_id": "1",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.type": "block",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.vdo": "0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.with_tpm": "0"
Jan 31 06:23:32 compute-0 priceless_black[247680]:             },
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "type": "block",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "vg_name": "ceph_vg1"
Jan 31 06:23:32 compute-0 priceless_black[247680]:         }
Jan 31 06:23:32 compute-0 priceless_black[247680]:     ],
Jan 31 06:23:32 compute-0 priceless_black[247680]:     "2": [
Jan 31 06:23:32 compute-0 priceless_black[247680]:         {
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "devices": [
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "/dev/loop5"
Jan 31 06:23:32 compute-0 priceless_black[247680]:             ],
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_name": "ceph_lv2",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_size": "21470642176",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "name": "ceph_lv2",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "tags": {
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.cluster_name": "ceph",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.crush_device_class": "",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.encrypted": "0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.objectstore": "bluestore",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osd_id": "2",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.type": "block",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.vdo": "0",
Jan 31 06:23:32 compute-0 priceless_black[247680]:                 "ceph.with_tpm": "0"
Jan 31 06:23:32 compute-0 priceless_black[247680]:             },
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "type": "block",
Jan 31 06:23:32 compute-0 priceless_black[247680]:             "vg_name": "ceph_vg2"
Jan 31 06:23:32 compute-0 priceless_black[247680]:         }
Jan 31 06:23:32 compute-0 priceless_black[247680]:     ]
Jan 31 06:23:32 compute-0 priceless_black[247680]: }
Jan 31 06:23:32 compute-0 systemd[1]: libpod-906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5.scope: Deactivated successfully.
Jan 31 06:23:32 compute-0 podman[247662]: 2026-01-31 06:23:32.462093804 +0000 UTC m=+0.400172855 container died 906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:23:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-205b8f4b024d48799a5b726cdbc1d7ef0d87d89d5e1bd6738760e217a961dfcb-merged.mount: Deactivated successfully.
Jan 31 06:23:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:32 compute-0 podman[247662]: 2026-01-31 06:23:32.69700551 +0000 UTC m=+0.635084561 container remove 906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 06:23:32 compute-0 systemd[1]: libpod-conmon-906b4e29862e95dfe65254c86c9cea4a4fafc995246aaf7206ce2a7b961447d5.scope: Deactivated successfully.
Jan 31 06:23:32 compute-0 sudo[247584]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:32 compute-0 sudo[247701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:23:32 compute-0 sudo[247701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:32 compute-0 sudo[247701]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:32 compute-0 sudo[247726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:23:32 compute-0 sudo[247726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:33 compute-0 ceph-mon[75251]: pgmap v1017: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:33 compute-0 podman[247764]: 2026-01-31 06:23:33.078706443 +0000 UTC m=+0.024310048 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:23:33 compute-0 podman[247764]: 2026-01-31 06:23:33.191807487 +0000 UTC m=+0.137411012 container create 6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 06:23:33 compute-0 systemd[1]: Started libpod-conmon-6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c.scope.
Jan 31 06:23:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:23:33 compute-0 podman[247764]: 2026-01-31 06:23:33.378166372 +0000 UTC m=+0.323769907 container init 6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:23:33 compute-0 podman[247764]: 2026-01-31 06:23:33.387463735 +0000 UTC m=+0.333067270 container start 6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 06:23:33 compute-0 goofy_brown[247781]: 167 167
Jan 31 06:23:33 compute-0 systemd[1]: libpod-6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c.scope: Deactivated successfully.
Jan 31 06:23:33 compute-0 podman[247764]: 2026-01-31 06:23:33.428714029 +0000 UTC m=+0.374317554 container attach 6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:23:33 compute-0 podman[247764]: 2026-01-31 06:23:33.429146001 +0000 UTC m=+0.374749506 container died 6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:23:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac6fd4127e0215d894bdc205eca979749d992094ece4296e6678742cf6668631-merged.mount: Deactivated successfully.
Jan 31 06:23:33 compute-0 podman[247764]: 2026-01-31 06:23:33.510244612 +0000 UTC m=+0.455848107 container remove 6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_brown, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 06:23:33 compute-0 systemd[1]: libpod-conmon-6175a3ceef59e4593c5427f4c4f5f8c11449772386e0a92e21ed59ac2e07329c.scope: Deactivated successfully.
Jan 31 06:23:33 compute-0 podman[247806]: 2026-01-31 06:23:33.628343358 +0000 UTC m=+0.035725600 container create 2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_feistel, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 06:23:33 compute-0 systemd[1]: Started libpod-conmon-2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37.scope.
Jan 31 06:23:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0783712dee8e3b0b554f556ed970a7fd2c1088c089996cf94b26f1c0cc73aa3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0783712dee8e3b0b554f556ed970a7fd2c1088c089996cf94b26f1c0cc73aa3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0783712dee8e3b0b554f556ed970a7fd2c1088c089996cf94b26f1c0cc73aa3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0783712dee8e3b0b554f556ed970a7fd2c1088c089996cf94b26f1c0cc73aa3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:23:33 compute-0 podman[247806]: 2026-01-31 06:23:33.610064712 +0000 UTC m=+0.017446974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:23:33 compute-0 podman[247806]: 2026-01-31 06:23:33.725206074 +0000 UTC m=+0.132588336 container init 2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 06:23:33 compute-0 podman[247806]: 2026-01-31 06:23:33.734094165 +0000 UTC m=+0.141476447 container start 2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_feistel, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 06:23:33 compute-0 podman[247806]: 2026-01-31 06:23:33.739154608 +0000 UTC m=+0.146536890 container attach 2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_feistel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:23:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:34 compute-0 lvm[247901]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:23:34 compute-0 lvm[247903]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:23:34 compute-0 lvm[247900]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:23:34 compute-0 lvm[247903]: VG ceph_vg2 finished
Jan 31 06:23:34 compute-0 lvm[247900]: VG ceph_vg0 finished
Jan 31 06:23:34 compute-0 lvm[247901]: VG ceph_vg1 finished
Jan 31 06:23:34 compute-0 gracious_feistel[247822]: {}
Jan 31 06:23:34 compute-0 systemd[1]: libpod-2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37.scope: Deactivated successfully.
Jan 31 06:23:34 compute-0 podman[247806]: 2026-01-31 06:23:34.485188273 +0000 UTC m=+0.892570535 container died 2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 06:23:34 compute-0 systemd[1]: libpod-2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37.scope: Consumed 1.173s CPU time.
Jan 31 06:23:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0783712dee8e3b0b554f556ed970a7fd2c1088c089996cf94b26f1c0cc73aa3e-merged.mount: Deactivated successfully.
Jan 31 06:23:34 compute-0 podman[247806]: 2026-01-31 06:23:34.520674755 +0000 UTC m=+0.928056997 container remove 2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:23:34 compute-0 systemd[1]: libpod-conmon-2f2fd45385642380be42b745bba4174b024087b43f8eea5a03f0d60303ef9f37.scope: Deactivated successfully.
Jan 31 06:23:34 compute-0 sudo[247726]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:23:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:23:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:23:34 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:23:34 compute-0 sudo[247918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:23:34 compute-0 sudo[247918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:23:34 compute-0 sudo[247918]: pam_unix(sudo:session): session closed for user root
Jan 31 06:23:35 compute-0 ceph-mon[75251]: pgmap v1018: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:35 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:23:35 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:23:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:36 compute-0 ceph-mon[75251]: pgmap v1019: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.234028) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840617234060, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1290, "num_deletes": 251, "total_data_size": 2013883, "memory_usage": 2046000, "flush_reason": "Manual Compaction"}
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840617453878, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1983879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19808, "largest_seqno": 21097, "table_properties": {"data_size": 1977798, "index_size": 3348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12735, "raw_average_key_size": 19, "raw_value_size": 1965624, "raw_average_value_size": 3042, "num_data_blocks": 153, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769840483, "oldest_key_time": 1769840483, "file_creation_time": 1769840617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 219920 microseconds, and 3873 cpu microseconds.
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.453940) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1983879 bytes OK
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.453963) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.538931) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.538964) EVENT_LOG_v1 {"time_micros": 1769840617538957, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.538985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2008082, prev total WAL file size 2008082, number of live WAL files 2.
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.539784) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1937KB)], [47(7513KB)]
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840617539812, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9677305, "oldest_snapshot_seqno": -1}
Jan 31 06:23:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4404 keys, 7912377 bytes, temperature: kUnknown
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840617626906, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7912377, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7881600, "index_size": 18635, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 108879, "raw_average_key_size": 24, "raw_value_size": 7800616, "raw_average_value_size": 1771, "num_data_blocks": 780, "num_entries": 4404, "num_filter_entries": 4404, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769840617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.634936) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7912377 bytes
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.640674) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.0 rd, 90.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.3 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.9) write-amplify(4.0) OK, records in: 4918, records dropped: 514 output_compression: NoCompression
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.640714) EVENT_LOG_v1 {"time_micros": 1769840617640698, "job": 24, "event": "compaction_finished", "compaction_time_micros": 87189, "compaction_time_cpu_micros": 13010, "output_level": 6, "num_output_files": 1, "total_output_size": 7912377, "num_input_records": 4918, "num_output_records": 4404, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840617641033, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840617641803, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.539752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.641833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.641838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.641839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.641841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:23:37 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:23:37.641842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:23:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:38 compute-0 ceph-mon[75251]: pgmap v1020: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:41 compute-0 ceph-mon[75251]: pgmap v1021: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:41 compute-0 nova_compute[239679]: 2026-01-31 06:23:41.232 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:41 compute-0 nova_compute[239679]: 2026-01-31 06:23:41.233 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:42 compute-0 ceph-mon[75251]: pgmap v1022: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:23:44
Jan 31 06:23:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:23:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:23:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups']
Jan 31 06:23:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:23:45 compute-0 ceph-mon[75251]: pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:23:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:46 compute-0 ceph-mon[75251]: pgmap v1024: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:48 compute-0 nova_compute[239679]: 2026-01-31 06:23:48.092 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:48 compute-0 nova_compute[239679]: 2026-01-31 06:23:48.093 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:23:48 compute-0 nova_compute[239679]: 2026-01-31 06:23:48.093 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:23:48 compute-0 ceph-mon[75251]: pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:23:50.219 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:23:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:23:50.219 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:23:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:23:50.219 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:23:51 compute-0 ceph-mon[75251]: pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:52 compute-0 ceph-mon[75251]: pgmap v1027: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:23:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/683507994' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:23:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:23:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/683507994' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:23:54 compute-0 ceph-mon[75251]: pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/683507994' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:23:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/683507994' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:23:55 compute-0 nova_compute[239679]: 2026-01-31 06:23:55.053 239684 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 3.23 sec
Jan 31 06:23:55 compute-0 podman[247944]: 2026-01-31 06:23:55.123792184 +0000 UTC m=+0.047878264 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 06:23:55 compute-0 podman[247943]: 2026-01-31 06:23:55.140886297 +0000 UTC m=+0.064476653 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 06:23:55 compute-0 nova_compute[239679]: 2026-01-31 06:23:55.827 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:23:55 compute-0 nova_compute[239679]: 2026-01-31 06:23:55.828 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:55 compute-0 nova_compute[239679]: 2026-01-31 06:23:55.828 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:55 compute-0 nova_compute[239679]: 2026-01-31 06:23:55.828 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:55 compute-0 nova_compute[239679]: 2026-01-31 06:23:55.829 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:55 compute-0 nova_compute[239679]: 2026-01-31 06:23:55.829 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:55 compute-0 nova_compute[239679]: 2026-01-31 06:23:55.829 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:23:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:23:57 compute-0 ceph-mon[75251]: pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:23:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:58 compute-0 ceph-mon[75251]: pgmap v1030: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:23:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:00 compute-0 nova_compute[239679]: 2026-01-31 06:24:00.548 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:24:00 compute-0 nova_compute[239679]: 2026-01-31 06:24:00.548 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:24:00 compute-0 nova_compute[239679]: 2026-01-31 06:24:00.549 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:24:01 compute-0 ceph-mon[75251]: pgmap v1031: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:02 compute-0 ceph-mon[75251]: pgmap v1032: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:05 compute-0 ceph-mon[75251]: pgmap v1033: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:05 compute-0 nova_compute[239679]: 2026-01-31 06:24:05.746 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:24:05 compute-0 nova_compute[239679]: 2026-01-31 06:24:05.746 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:24:05 compute-0 nova_compute[239679]: 2026-01-31 06:24:05.747 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:24:05 compute-0 nova_compute[239679]: 2026-01-31 06:24:05.747 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:24:05 compute-0 nova_compute[239679]: 2026-01-31 06:24:05.747 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:24:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:06 compute-0 ceph-mon[75251]: pgmap v1034: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:24:06 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218127092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:24:06 compute-0 nova_compute[239679]: 2026-01-31 06:24:06.291 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:24:06 compute-0 nova_compute[239679]: 2026-01-31 06:24:06.417 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:24:06 compute-0 nova_compute[239679]: 2026-01-31 06:24:06.418 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:24:06 compute-0 nova_compute[239679]: 2026-01-31 06:24:06.419 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:24:06 compute-0 nova_compute[239679]: 2026-01-31 06:24:06.419 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:24:07 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4218127092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:24:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:08 compute-0 ceph-mon[75251]: pgmap v1035: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:11 compute-0 ceph-mon[75251]: pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:12 compute-0 ceph-mon[75251]: pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:14 compute-0 nova_compute[239679]: 2026-01-31 06:24:14.348 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:24:14 compute-0 nova_compute[239679]: 2026-01-31 06:24:14.348 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:24:14 compute-0 nova_compute[239679]: 2026-01-31 06:24:14.435 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing inventories for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 06:24:14 compute-0 nova_compute[239679]: 2026-01-31 06:24:14.541 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Updating ProviderTree inventory for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 06:24:14 compute-0 nova_compute[239679]: 2026-01-31 06:24:14.542 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Updating inventory in ProviderTree for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 06:24:14 compute-0 nova_compute[239679]: 2026-01-31 06:24:14.557 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing aggregate associations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 06:24:14 compute-0 nova_compute[239679]: 2026-01-31 06:24:14.575 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing trait associations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 06:24:14 compute-0 nova_compute[239679]: 2026-01-31 06:24:14.589 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:24:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:24:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591436556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:24:15 compute-0 nova_compute[239679]: 2026-01-31 06:24:15.167 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:24:15 compute-0 nova_compute[239679]: 2026-01-31 06:24:15.172 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:24:15 compute-0 ceph-mon[75251]: pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:24:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:24:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:24:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:24:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:24:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:24:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3591436556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:24:16 compute-0 ceph-mon[75251]: pgmap v1039: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:17 compute-0 nova_compute[239679]: 2026-01-31 06:24:17.092 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:24:17 compute-0 nova_compute[239679]: 2026-01-31 06:24:17.094 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:24:17 compute-0 nova_compute[239679]: 2026-01-31 06:24:17.094 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 10.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:24:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:19 compute-0 ceph-mon[75251]: pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:20 compute-0 ceph-mon[75251]: pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:23 compute-0 ceph-mon[75251]: pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:24 compute-0 ceph-mon[75251]: pgmap v1043: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:26 compute-0 podman[248031]: 2026-01-31 06:24:26.122214992 +0000 UTC m=+0.047926905 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 06:24:26 compute-0 podman[248030]: 2026-01-31 06:24:26.138885043 +0000 UTC m=+0.065998955 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:24:27 compute-0 ceph-mon[75251]: pgmap v1044: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:28 compute-0 ceph-mon[75251]: pgmap v1045: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:31 compute-0 ceph-mon[75251]: pgmap v1046: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:32 compute-0 ceph-mon[75251]: pgmap v1047: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:34 compute-0 sudo[248076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:24:34 compute-0 sudo[248076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:34 compute-0 sudo[248076]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:34 compute-0 sudo[248101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:24:34 compute-0 sudo[248101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:35 compute-0 sudo[248101]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:35 compute-0 sudo[248157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:24:35 compute-0 sudo[248157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:35 compute-0 sudo[248157]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:35 compute-0 ceph-mon[75251]: pgmap v1048: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:35 compute-0 sudo[248182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 31 06:24:35 compute-0 sudo[248182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:35 compute-0 sudo[248182]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:24:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:24:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:24:35 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:24:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:24:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:24:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:24:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:24:35 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:24:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:24:35 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:24:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:24:35 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:24:35 compute-0 sudo[248225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:24:35 compute-0 sudo[248225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:35 compute-0 sudo[248225]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:35 compute-0 sudo[248250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:24:35 compute-0 sudo[248250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:36 compute-0 podman[248287]: 2026-01-31 06:24:36.22759692 +0000 UTC m=+0.019862502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:24:36 compute-0 podman[248287]: 2026-01-31 06:24:36.609044445 +0000 UTC m=+0.401310037 container create c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 06:24:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:24:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:24:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:24:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:24:36 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:24:36 compute-0 ceph-mon[75251]: pgmap v1049: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:36 compute-0 systemd[1]: Started libpod-conmon-c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a.scope.
Jan 31 06:24:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:24:37 compute-0 podman[248287]: 2026-01-31 06:24:37.04001925 +0000 UTC m=+0.832284832 container init c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 06:24:37 compute-0 podman[248287]: 2026-01-31 06:24:37.049313352 +0000 UTC m=+0.841578924 container start c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 06:24:37 compute-0 systemd[1]: libpod-c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a.scope: Deactivated successfully.
Jan 31 06:24:37 compute-0 focused_morse[248303]: 167 167
Jan 31 06:24:37 compute-0 conmon[248303]: conmon c0fec7abb1170a52d840 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a.scope/container/memory.events
Jan 31 06:24:37 compute-0 podman[248287]: 2026-01-31 06:24:37.154931186 +0000 UTC m=+0.947196798 container attach c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:24:37 compute-0 podman[248287]: 2026-01-31 06:24:37.156266223 +0000 UTC m=+0.948531795 container died c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-75fecfa9710986a285d838f20543bc0d568ef8cfb451bae7cd1520430a0f7a5a-merged.mount: Deactivated successfully.
Jan 31 06:24:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:38 compute-0 podman[248287]: 2026-01-31 06:24:38.031044272 +0000 UTC m=+1.823309844 container remove c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:24:38 compute-0 systemd[1]: libpod-conmon-c0fec7abb1170a52d8408d9c89be1060828f8a167f7c159d933c7dcad1b8b02a.scope: Deactivated successfully.
Jan 31 06:24:38 compute-0 ceph-mon[75251]: pgmap v1050: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:38 compute-0 podman[248327]: 2026-01-31 06:24:38.160079058 +0000 UTC m=+0.021692234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:24:38 compute-0 podman[248327]: 2026-01-31 06:24:38.387679968 +0000 UTC m=+0.249293164 container create 35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 06:24:38 compute-0 systemd[1]: Started libpod-conmon-35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494.scope.
Jan 31 06:24:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba85986063b4db44307a13a8f7ff79497f442de4474c3d54b0f27b98e1908f03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba85986063b4db44307a13a8f7ff79497f442de4474c3d54b0f27b98e1908f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba85986063b4db44307a13a8f7ff79497f442de4474c3d54b0f27b98e1908f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba85986063b4db44307a13a8f7ff79497f442de4474c3d54b0f27b98e1908f03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba85986063b4db44307a13a8f7ff79497f442de4474c3d54b0f27b98e1908f03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:38 compute-0 podman[248327]: 2026-01-31 06:24:38.622321276 +0000 UTC m=+0.483934522 container init 35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:24:38 compute-0 podman[248327]: 2026-01-31 06:24:38.628919273 +0000 UTC m=+0.490532469 container start 35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 06:24:38 compute-0 podman[248327]: 2026-01-31 06:24:38.675635242 +0000 UTC m=+0.537248398 container attach 35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 06:24:39 compute-0 elated_burnell[248344]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:24:39 compute-0 elated_burnell[248344]: --> All data devices are unavailable
Jan 31 06:24:39 compute-0 systemd[1]: libpod-35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494.scope: Deactivated successfully.
Jan 31 06:24:39 compute-0 podman[248327]: 2026-01-31 06:24:39.061268926 +0000 UTC m=+0.922882082 container died 35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 06:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba85986063b4db44307a13a8f7ff79497f442de4474c3d54b0f27b98e1908f03-merged.mount: Deactivated successfully.
Jan 31 06:24:39 compute-0 podman[248327]: 2026-01-31 06:24:39.311947357 +0000 UTC m=+1.173560523 container remove 35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_burnell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:24:39 compute-0 systemd[1]: libpod-conmon-35d943511793d4ea57d8aee6ea0320477ba9bb7a3923be3903344953fd2ed494.scope: Deactivated successfully.
Jan 31 06:24:39 compute-0 sudo[248250]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:39 compute-0 sudo[248376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:24:39 compute-0 sudo[248376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:39 compute-0 sudo[248376]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:39 compute-0 sudo[248401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:24:39 compute-0 sudo[248401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:24:39 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 6064 writes, 25K keys, 6064 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6064 writes, 1091 syncs, 5.56 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 06:24:39 compute-0 podman[248438]: 2026-01-31 06:24:39.762725141 +0000 UTC m=+0.107019244 container create 621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nightingale, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 06:24:39 compute-0 podman[248438]: 2026-01-31 06:24:39.674584331 +0000 UTC m=+0.018878464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:24:39 compute-0 systemd[1]: Started libpod-conmon-621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4.scope.
Jan 31 06:24:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:24:39 compute-0 podman[248438]: 2026-01-31 06:24:39.918719637 +0000 UTC m=+0.263013810 container init 621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 06:24:39 compute-0 podman[248438]: 2026-01-31 06:24:39.923093951 +0000 UTC m=+0.267388094 container start 621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nightingale, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 06:24:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:39 compute-0 lucid_nightingale[248455]: 167 167
Jan 31 06:24:39 compute-0 systemd[1]: libpod-621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4.scope: Deactivated successfully.
Jan 31 06:24:40 compute-0 podman[248438]: 2026-01-31 06:24:40.019562516 +0000 UTC m=+0.363856649 container attach 621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:24:40 compute-0 podman[248438]: 2026-01-31 06:24:40.019938027 +0000 UTC m=+0.364232130 container died 621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:24:40 compute-0 ceph-mon[75251]: pgmap v1051: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea6e916e2324f8d99f496e19c66e76e1a3529804e448e43ef077e69ccce2b053-merged.mount: Deactivated successfully.
Jan 31 06:24:40 compute-0 podman[248438]: 2026-01-31 06:24:40.51779897 +0000 UTC m=+0.862093083 container remove 621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 06:24:40 compute-0 systemd[1]: libpod-conmon-621f096605bfad7b888f3d33544221c7a016d61ab832f782168885f8fd9975b4.scope: Deactivated successfully.
Jan 31 06:24:40 compute-0 podman[248481]: 2026-01-31 06:24:40.699912775 +0000 UTC m=+0.092209146 container create a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_dirac, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:24:40 compute-0 podman[248481]: 2026-01-31 06:24:40.641195576 +0000 UTC m=+0.033491927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:24:40 compute-0 systemd[1]: Started libpod-conmon-a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32.scope.
Jan 31 06:24:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8cc2b39a48793ad483b22c0739c8b906d1bc865518eb1a9a2a1c0c845d4249/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8cc2b39a48793ad483b22c0739c8b906d1bc865518eb1a9a2a1c0c845d4249/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8cc2b39a48793ad483b22c0739c8b906d1bc865518eb1a9a2a1c0c845d4249/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8cc2b39a48793ad483b22c0739c8b906d1bc865518eb1a9a2a1c0c845d4249/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:40 compute-0 podman[248481]: 2026-01-31 06:24:40.867357645 +0000 UTC m=+0.259654066 container init a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 06:24:40 compute-0 podman[248481]: 2026-01-31 06:24:40.875654389 +0000 UTC m=+0.267950720 container start a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:24:40 compute-0 podman[248481]: 2026-01-31 06:24:40.910582416 +0000 UTC m=+0.302878767 container attach a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:24:41 compute-0 stoic_dirac[248497]: {
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:     "0": [
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:         {
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "devices": [
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "/dev/loop3"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             ],
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_name": "ceph_lv0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_size": "21470642176",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "name": "ceph_lv0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "tags": {
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cluster_name": "ceph",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.crush_device_class": "",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.encrypted": "0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.objectstore": "bluestore",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osd_id": "0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.type": "block",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.vdo": "0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.with_tpm": "0"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             },
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "type": "block",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "vg_name": "ceph_vg0"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:         }
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:     ],
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:     "1": [
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:         {
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "devices": [
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "/dev/loop4"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             ],
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_name": "ceph_lv1",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_size": "21470642176",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "name": "ceph_lv1",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "tags": {
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cluster_name": "ceph",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.crush_device_class": "",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.encrypted": "0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.objectstore": "bluestore",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osd_id": "1",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.type": "block",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.vdo": "0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.with_tpm": "0"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             },
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "type": "block",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "vg_name": "ceph_vg1"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:         }
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:     ],
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:     "2": [
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:         {
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "devices": [
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "/dev/loop5"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             ],
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_name": "ceph_lv2",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_size": "21470642176",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "name": "ceph_lv2",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "tags": {
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.cluster_name": "ceph",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.crush_device_class": "",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.encrypted": "0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.objectstore": "bluestore",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osd_id": "2",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.type": "block",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.vdo": "0",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:                 "ceph.with_tpm": "0"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             },
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "type": "block",
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:             "vg_name": "ceph_vg2"
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:         }
Jan 31 06:24:41 compute-0 stoic_dirac[248497]:     ]
Jan 31 06:24:41 compute-0 stoic_dirac[248497]: }
Jan 31 06:24:41 compute-0 systemd[1]: libpod-a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32.scope: Deactivated successfully.
Jan 31 06:24:41 compute-0 podman[248481]: 2026-01-31 06:24:41.187343914 +0000 UTC m=+0.579640255 container died a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 06:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e8cc2b39a48793ad483b22c0739c8b906d1bc865518eb1a9a2a1c0c845d4249-merged.mount: Deactivated successfully.
Jan 31 06:24:41 compute-0 podman[248481]: 2026-01-31 06:24:41.259519473 +0000 UTC m=+0.651815814 container remove a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_dirac, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:24:41 compute-0 systemd[1]: libpod-conmon-a92269337c186e67fb0ce1bca348400526c1322b12f1dad455166149f0f98b32.scope: Deactivated successfully.
Jan 31 06:24:41 compute-0 sudo[248401]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:41 compute-0 sudo[248516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:24:41 compute-0 sudo[248516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:41 compute-0 sudo[248516]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:41 compute-0 sudo[248541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:24:41 compute-0 sudo[248541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:41 compute-0 podman[248577]: 2026-01-31 06:24:41.656086244 +0000 UTC m=+0.035750901 container create 417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 06:24:41 compute-0 systemd[1]: Started libpod-conmon-417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe.scope.
Jan 31 06:24:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:24:41 compute-0 podman[248577]: 2026-01-31 06:24:41.723174309 +0000 UTC m=+0.102838986 container init 417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kepler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:24:41 compute-0 podman[248577]: 2026-01-31 06:24:41.728552921 +0000 UTC m=+0.108217578 container start 417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kepler, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:24:41 compute-0 sleepy_kepler[248593]: 167 167
Jan 31 06:24:41 compute-0 systemd[1]: libpod-417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe.scope: Deactivated successfully.
Jan 31 06:24:41 compute-0 podman[248577]: 2026-01-31 06:24:41.732634396 +0000 UTC m=+0.112299073 container attach 417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kepler, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:24:41 compute-0 podman[248577]: 2026-01-31 06:24:41.732956666 +0000 UTC m=+0.112621313 container died 417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kepler, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:24:41 compute-0 podman[248577]: 2026-01-31 06:24:41.641912474 +0000 UTC m=+0.021577131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe265d986cfe505de4d26baffc0b2f3cb5bd213a3dcba24d397d6ebd7e7d65de-merged.mount: Deactivated successfully.
Jan 31 06:24:41 compute-0 podman[248577]: 2026-01-31 06:24:41.775035064 +0000 UTC m=+0.154699721 container remove 417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:24:41 compute-0 systemd[1]: libpod-conmon-417677065debc95d11c4ea35a8fdc2462e086d1115be7d7f415f4f97cb7d5fbe.scope: Deactivated successfully.
Jan 31 06:24:41 compute-0 podman[248617]: 2026-01-31 06:24:41.891195946 +0000 UTC m=+0.047298037 container create 2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 06:24:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:41 compute-0 systemd[1]: Started libpod-conmon-2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c.scope.
Jan 31 06:24:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36bcf6ec230e1282b89fcf207587a59ec3c1f73d18a12bec202a38e3d0afeb02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36bcf6ec230e1282b89fcf207587a59ec3c1f73d18a12bec202a38e3d0afeb02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36bcf6ec230e1282b89fcf207587a59ec3c1f73d18a12bec202a38e3d0afeb02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36bcf6ec230e1282b89fcf207587a59ec3c1f73d18a12bec202a38e3d0afeb02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:24:41 compute-0 podman[248617]: 2026-01-31 06:24:41.864348967 +0000 UTC m=+0.020451118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:24:41 compute-0 podman[248617]: 2026-01-31 06:24:41.969861148 +0000 UTC m=+0.125963249 container init 2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_robinson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 06:24:41 compute-0 podman[248617]: 2026-01-31 06:24:41.974726895 +0000 UTC m=+0.130828976 container start 2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:24:41 compute-0 podman[248617]: 2026-01-31 06:24:41.981571349 +0000 UTC m=+0.137673420 container attach 2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_robinson, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:24:42 compute-0 lvm[248709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:24:42 compute-0 lvm[248709]: VG ceph_vg0 finished
Jan 31 06:24:42 compute-0 lvm[248712]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:24:42 compute-0 lvm[248712]: VG ceph_vg1 finished
Jan 31 06:24:42 compute-0 lvm[248714]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:24:42 compute-0 lvm[248714]: VG ceph_vg2 finished
Jan 31 06:24:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:42 compute-0 beautiful_robinson[248633]: {}
Jan 31 06:24:42 compute-0 systemd[1]: libpod-2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c.scope: Deactivated successfully.
Jan 31 06:24:42 compute-0 podman[248617]: 2026-01-31 06:24:42.673427662 +0000 UTC m=+0.829529753 container died 2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_robinson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-36bcf6ec230e1282b89fcf207587a59ec3c1f73d18a12bec202a38e3d0afeb02-merged.mount: Deactivated successfully.
Jan 31 06:24:42 compute-0 podman[248617]: 2026-01-31 06:24:42.715011957 +0000 UTC m=+0.871114028 container remove 2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 31 06:24:42 compute-0 systemd[1]: libpod-conmon-2b8aeeaf4cad991be706e2bb7bea4ee6c34be2596e7312dc28d635e3ad988d3c.scope: Deactivated successfully.
Jan 31 06:24:42 compute-0 sudo[248541]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:24:42 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:24:42 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:42 compute-0 sudo[248731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:24:42 compute-0 sudo[248731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:24:42 compute-0 sudo[248731]: pam_unix(sudo:session): session closed for user root
Jan 31 06:24:42 compute-0 ceph-mon[75251]: pgmap v1052: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:42 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:42 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:24:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:24:43 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 8716 writes, 35K keys, 8716 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8716 writes, 1835 syncs, 4.75 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 06:24:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:24:44
Jan 31 06:24:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:24:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:24:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'images']
Jan 31 06:24:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:24:45 compute-0 ceph-mon[75251]: pgmap v1053: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:24:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:46 compute-0 ceph-mon[75251]: pgmap v1054: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:48 compute-0 ceph-mon[75251]: pgmap v1055: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:24:50.220 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:24:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:24:50.221 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:24:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:24:50.221 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:24:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:24:50 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 5933 writes, 25K keys, 5933 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5933 writes, 990 syncs, 5.99 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 06:24:50 compute-0 ceph-mon[75251]: pgmap v1056: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:52 compute-0 ceph-mon[75251]: pgmap v1057: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:24:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699787604' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:24:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:24:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699787604' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:24:54 compute-0 ceph-mgr[75550]: [devicehealth INFO root] Check health
Jan 31 06:24:55 compute-0 ceph-mon[75251]: pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/1699787604' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:24:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/1699787604' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:24:55 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:24:56 compute-0 ceph-mon[75251]: pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:57 compute-0 podman[248757]: 2026-01-31 06:24:57.165429255 +0000 UTC m=+0.081530654 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:24:57 compute-0 podman[248756]: 2026-01-31 06:24:57.171216658 +0000 UTC m=+0.088632804 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 06:24:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:24:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:58 compute-0 ceph-mon[75251]: pgmap v1060: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:24:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:00 compute-0 ceph-mon[75251]: pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:02 compute-0 ceph-mon[75251]: pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:04 compute-0 ceph-mon[75251]: pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:07 compute-0 ceph-mon[75251]: pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:09 compute-0 ceph-mon[75251]: pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:11 compute-0 ceph-mon[75251]: pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:13 compute-0 ceph-mon[75251]: pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:25:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:25:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:25:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:25:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:25:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:25:15 compute-0 ceph-mon[75251]: pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:16 compute-0 ceph-mon[75251]: pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:17 compute-0 nova_compute[239679]: 2026-01-31 06:25:17.096 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:17 compute-0 nova_compute[239679]: 2026-01-31 06:25:17.097 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:19 compute-0 ceph-mon[75251]: pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:20 compute-0 nova_compute[239679]: 2026-01-31 06:25:20.858 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:20 compute-0 nova_compute[239679]: 2026-01-31 06:25:20.858 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:25:20 compute-0 nova_compute[239679]: 2026-01-31 06:25:20.859 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:25:21 compute-0 ceph-mon[75251]: pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:22 compute-0 ceph-mon[75251]: pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:25 compute-0 ceph-mon[75251]: pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.379 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.380 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.380 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.380 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.381 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.381 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.381 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.381 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:25:26 compute-0 nova_compute[239679]: 2026-01-31 06:25:26.381 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:25:27 compute-0 ceph-mon[75251]: pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:28 compute-0 podman[248804]: 2026-01-31 06:25:28.11567705 +0000 UTC m=+0.039010453 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 06:25:28 compute-0 podman[248803]: 2026-01-31 06:25:28.136804197 +0000 UTC m=+0.061088827 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:25:28 compute-0 nova_compute[239679]: 2026-01-31 06:25:28.933 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:25:28 compute-0 nova_compute[239679]: 2026-01-31 06:25:28.933 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:25:28 compute-0 nova_compute[239679]: 2026-01-31 06:25:28.934 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:25:28 compute-0 nova_compute[239679]: 2026-01-31 06:25:28.934 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:25:28 compute-0 nova_compute[239679]: 2026-01-31 06:25:28.934 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:25:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:25:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/764557628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:25:29 compute-0 nova_compute[239679]: 2026-01-31 06:25:29.490 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:25:29 compute-0 ceph-mon[75251]: pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:29 compute-0 nova_compute[239679]: 2026-01-31 06:25:29.603 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:25:29 compute-0 nova_compute[239679]: 2026-01-31 06:25:29.604 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:25:29 compute-0 nova_compute[239679]: 2026-01-31 06:25:29.605 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:25:29 compute-0 nova_compute[239679]: 2026-01-31 06:25:29.605 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:25:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:30 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/764557628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:25:30 compute-0 ceph-mon[75251]: pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:30 compute-0 nova_compute[239679]: 2026-01-31 06:25:30.743 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:25:30 compute-0 nova_compute[239679]: 2026-01-31 06:25:30.743 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:25:30 compute-0 nova_compute[239679]: 2026-01-31 06:25:30.757 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:25:31 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:25:31 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149825749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:25:31 compute-0 nova_compute[239679]: 2026-01-31 06:25:31.239 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:25:31 compute-0 nova_compute[239679]: 2026-01-31 06:25:31.243 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:25:31 compute-0 nova_compute[239679]: 2026-01-31 06:25:31.325 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:25:31 compute-0 nova_compute[239679]: 2026-01-31 06:25:31.327 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:25:31 compute-0 nova_compute[239679]: 2026-01-31 06:25:31.327 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:25:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:32 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3149825749' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:25:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:33 compute-0 ceph-mon[75251]: pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:35 compute-0 ceph-mon[75251]: pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:36 compute-0 ceph-mon[75251]: pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:39 compute-0 ceph-mon[75251]: pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:41 compute-0 ceph-mon[75251]: pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:42 compute-0 sudo[248890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:25:42 compute-0 sudo[248890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:42 compute-0 sudo[248890]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:42 compute-0 sudo[248915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 06:25:42 compute-0 sudo[248915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:43 compute-0 sudo[248915]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:25:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:25:43 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:43 compute-0 ceph-mon[75251]: pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:43 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:43 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:43 compute-0 sudo[248961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:25:43 compute-0 sudo[248961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:43 compute-0 sudo[248961]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:43 compute-0 sudo[248986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:25:43 compute-0 sudo[248986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:43 compute-0 sudo[248986]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:43 compute-0 sudo[249041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:25:43 compute-0 sudo[249041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:43 compute-0 sudo[249041]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:43 compute-0 sudo[249066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- inventory --format=json-pretty --filter-for-batch
Jan 31 06:25:43 compute-0 sudo[249066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:44 compute-0 podman[249104]: 2026-01-31 06:25:43.986247294 +0000 UTC m=+0.017621469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:25:44 compute-0 podman[249104]: 2026-01-31 06:25:44.104509014 +0000 UTC m=+0.135883209 container create 5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lamport, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 06:25:44 compute-0 systemd[1]: Started libpod-conmon-5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280.scope.
Jan 31 06:25:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:25:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:25:44
Jan 31 06:25:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:25:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:25:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'default.rgw.meta', 'images', 'volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 31 06:25:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:25:44 compute-0 podman[249104]: 2026-01-31 06:25:44.528728078 +0000 UTC m=+0.560102323 container init 5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:25:44 compute-0 podman[249104]: 2026-01-31 06:25:44.539647047 +0000 UTC m=+0.571021242 container start 5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lamport, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:25:44 compute-0 loving_lamport[249121]: 167 167
Jan 31 06:25:44 compute-0 systemd[1]: libpod-5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280.scope: Deactivated successfully.
Jan 31 06:25:44 compute-0 conmon[249121]: conmon 5c9088cf0a8f23bb0083 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280.scope/container/memory.events
Jan 31 06:25:44 compute-0 podman[249104]: 2026-01-31 06:25:44.618821553 +0000 UTC m=+0.650195708 container attach 5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:25:44 compute-0 podman[249104]: 2026-01-31 06:25:44.620296295 +0000 UTC m=+0.651670450 container died 5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:25:44 compute-0 ceph-mon[75251]: pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-db34b7f89db8a204bbd392431407295e101fcf2ecd0fae48e03e727939b06aee-merged.mount: Deactivated successfully.
Jan 31 06:25:44 compute-0 podman[249104]: 2026-01-31 06:25:44.686377872 +0000 UTC m=+0.717752027 container remove 5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:25:44 compute-0 systemd[1]: libpod-conmon-5c9088cf0a8f23bb00831b95433aaf0b33fe26f3c5ecd8dd07daca358458d280.scope: Deactivated successfully.
Jan 31 06:25:44 compute-0 podman[249148]: 2026-01-31 06:25:44.804480378 +0000 UTC m=+0.035636398 container create b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_bassi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:25:44 compute-0 systemd[1]: Started libpod-conmon-b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28.scope.
Jan 31 06:25:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4182d80f61a18c5a8ce5b534b3b68cf1662f5c8a2c54794ac861a51b9d1a916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4182d80f61a18c5a8ce5b534b3b68cf1662f5c8a2c54794ac861a51b9d1a916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4182d80f61a18c5a8ce5b534b3b68cf1662f5c8a2c54794ac861a51b9d1a916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4182d80f61a18c5a8ce5b534b3b68cf1662f5c8a2c54794ac861a51b9d1a916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:44 compute-0 podman[249148]: 2026-01-31 06:25:44.885344892 +0000 UTC m=+0.116500972 container init b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_bassi, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:25:44 compute-0 podman[249148]: 2026-01-31 06:25:44.790904834 +0000 UTC m=+0.022060884 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:25:44 compute-0 podman[249148]: 2026-01-31 06:25:44.893384239 +0000 UTC m=+0.124540269 container start b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_bassi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 06:25:44 compute-0 podman[249148]: 2026-01-31 06:25:44.897009252 +0000 UTC m=+0.128165282 container attach b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 06:25:45 compute-0 sshd-session[249142]: Invalid user sol from 45.148.10.240 port 48842
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:25:45 compute-0 busy_bassi[249166]: [
Jan 31 06:25:45 compute-0 busy_bassi[249166]:     {
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "available": false,
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "being_replaced": false,
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "ceph_device_lvm": false,
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "lsm_data": {},
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "lvs": [],
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "path": "/dev/sr0",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "rejected_reasons": [
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "Has a FileSystem",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "Insufficient space (<5GB)"
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         ],
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         "sys_api": {
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "actuators": null,
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "device_nodes": [
Jan 31 06:25:45 compute-0 busy_bassi[249166]:                 "sr0"
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             ],
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "devname": "sr0",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "human_readable_size": "482.00 KB",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "id_bus": "ata",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "model": "QEMU DVD-ROM",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "nr_requests": "2",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "parent": "/dev/sr0",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "partitions": {},
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "path": "/dev/sr0",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "removable": "1",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "rev": "2.5+",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "ro": "0",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "rotational": "1",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "sas_address": "",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "sas_device_handle": "",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "scheduler_mode": "mq-deadline",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "sectors": 0,
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "sectorsize": "2048",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "size": 493568.0,
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "support_discard": "2048",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "type": "disk",
Jan 31 06:25:45 compute-0 busy_bassi[249166]:             "vendor": "QEMU"
Jan 31 06:25:45 compute-0 busy_bassi[249166]:         }
Jan 31 06:25:45 compute-0 busy_bassi[249166]:     }
Jan 31 06:25:45 compute-0 busy_bassi[249166]: ]
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:25:45 compute-0 systemd[1]: libpod-b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28.scope: Deactivated successfully.
Jan 31 06:25:45 compute-0 podman[249148]: 2026-01-31 06:25:45.389844903 +0000 UTC m=+0.621000963 container died b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_bassi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:25:45 compute-0 sshd-session[249142]: Connection closed by invalid user sol 45.148.10.240 port 48842 [preauth]
Jan 31 06:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4182d80f61a18c5a8ce5b534b3b68cf1662f5c8a2c54794ac861a51b9d1a916-merged.mount: Deactivated successfully.
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:25:45 compute-0 podman[249148]: 2026-01-31 06:25:45.540081447 +0000 UTC m=+0.771237467 container remove b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_bassi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 06:25:45 compute-0 systemd[1]: libpod-conmon-b53ca227ec5b665184945b8421ae1cd07caa72b52d52ab8e0023713a8b898d28.scope: Deactivated successfully.
Jan 31 06:25:45 compute-0 sudo[249066]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:25:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:25:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:25:45 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:25:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:25:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:25:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:25:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:25:45 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:25:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:25:45 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:25:45 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:25:45 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:25:45 compute-0 sudo[249896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:25:45 compute-0 sudo[249896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:45 compute-0 sudo[249896]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:45 compute-0 sudo[249921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:25:45 compute-0 sudo[249921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:46 compute-0 podman[249958]: 2026-01-31 06:25:46.127685925 +0000 UTC m=+0.115093612 container create 21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 06:25:46 compute-0 podman[249958]: 2026-01-31 06:25:46.03580784 +0000 UTC m=+0.023215547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:25:46 compute-0 systemd[1]: Started libpod-conmon-21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995.scope.
Jan 31 06:25:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:25:46 compute-0 podman[249958]: 2026-01-31 06:25:46.517872277 +0000 UTC m=+0.505279994 container init 21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:25:46 compute-0 podman[249958]: 2026-01-31 06:25:46.526643975 +0000 UTC m=+0.514051662 container start 21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:25:46 compute-0 affectionate_ramanujan[249974]: 167 167
Jan 31 06:25:46 compute-0 systemd[1]: libpod-21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995.scope: Deactivated successfully.
Jan 31 06:25:46 compute-0 podman[249958]: 2026-01-31 06:25:46.750329224 +0000 UTC m=+0.737736911 container attach 21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 06:25:46 compute-0 podman[249958]: 2026-01-31 06:25:46.750886519 +0000 UTC m=+0.738294206 container died 21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:25:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:25:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:25:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:25:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:25:46 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:25:46 compute-0 ceph-mon[75251]: pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa3a9d6027c3ca505977c323ae32f5c40c67c656cf7c3681a130cab808e1dae-merged.mount: Deactivated successfully.
Jan 31 06:25:47 compute-0 podman[249958]: 2026-01-31 06:25:47.753702537 +0000 UTC m=+1.741110224 container remove 21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ramanujan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:25:47 compute-0 systemd[1]: libpod-conmon-21fc36601e3b42a0e0e3d11a8c2f72dc4c548120b25999626308db76aad30995.scope: Deactivated successfully.
Jan 31 06:25:47 compute-0 podman[249997]: 2026-01-31 06:25:47.879254024 +0000 UTC m=+0.049150439 container create bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 06:25:47 compute-0 systemd[1]: Started libpod-conmon-bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab.scope.
Jan 31 06:25:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:25:47 compute-0 podman[249997]: 2026-01-31 06:25:47.851363066 +0000 UTC m=+0.021259521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661ab82b0a9ac77b1faecd61ff7f06a24f5de3c74a3fe3976a9e4cfb8ff13ace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661ab82b0a9ac77b1faecd61ff7f06a24f5de3c74a3fe3976a9e4cfb8ff13ace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661ab82b0a9ac77b1faecd61ff7f06a24f5de3c74a3fe3976a9e4cfb8ff13ace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661ab82b0a9ac77b1faecd61ff7f06a24f5de3c74a3fe3976a9e4cfb8ff13ace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661ab82b0a9ac77b1faecd61ff7f06a24f5de3c74a3fe3976a9e4cfb8ff13ace/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:47 compute-0 podman[249997]: 2026-01-31 06:25:47.962877136 +0000 UTC m=+0.132773561 container init bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:25:47 compute-0 podman[249997]: 2026-01-31 06:25:47.97008237 +0000 UTC m=+0.139978775 container start bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 06:25:47 compute-0 podman[249997]: 2026-01-31 06:25:47.973101745 +0000 UTC m=+0.142998160 container attach bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 06:25:48 compute-0 quirky_dewdney[250014]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:25:48 compute-0 quirky_dewdney[250014]: --> All data devices are unavailable
Jan 31 06:25:48 compute-0 systemd[1]: libpod-bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab.scope: Deactivated successfully.
Jan 31 06:25:48 compute-0 conmon[250014]: conmon bf5b8a33f57b16774b0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab.scope/container/memory.events
Jan 31 06:25:48 compute-0 podman[249997]: 2026-01-31 06:25:48.344288941 +0000 UTC m=+0.514185356 container died bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dewdney, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 06:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-661ab82b0a9ac77b1faecd61ff7f06a24f5de3c74a3fe3976a9e4cfb8ff13ace-merged.mount: Deactivated successfully.
Jan 31 06:25:48 compute-0 podman[249997]: 2026-01-31 06:25:48.386959406 +0000 UTC m=+0.556855821 container remove bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_dewdney, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 06:25:48 compute-0 systemd[1]: libpod-conmon-bf5b8a33f57b16774b0f36d08ce95b0b47a10122c994a3b8e8a0b2f9fa3630ab.scope: Deactivated successfully.
Jan 31 06:25:48 compute-0 sudo[249921]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:48 compute-0 sudo[250045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:25:48 compute-0 sudo[250045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:48 compute-0 sudo[250045]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:48 compute-0 ceph-mon[75251]: pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:48 compute-0 sudo[250070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:25:48 compute-0 sudo[250070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:48 compute-0 podman[250107]: 2026-01-31 06:25:48.766649282 +0000 UTC m=+0.031086800 container create 614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_meninsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 06:25:48 compute-0 systemd[1]: Started libpod-conmon-614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716.scope.
Jan 31 06:25:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:25:48 compute-0 podman[250107]: 2026-01-31 06:25:48.836376031 +0000 UTC m=+0.100813569 container init 614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 06:25:48 compute-0 podman[250107]: 2026-01-31 06:25:48.841343302 +0000 UTC m=+0.105780820 container start 614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_meninsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 31 06:25:48 compute-0 elated_meninsky[250124]: 167 167
Jan 31 06:25:48 compute-0 systemd[1]: libpod-614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716.scope: Deactivated successfully.
Jan 31 06:25:48 compute-0 podman[250107]: 2026-01-31 06:25:48.845410317 +0000 UTC m=+0.109847855 container attach 614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_meninsky, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 06:25:48 compute-0 podman[250107]: 2026-01-31 06:25:48.845754766 +0000 UTC m=+0.110192284 container died 614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:25:48 compute-0 podman[250107]: 2026-01-31 06:25:48.753671075 +0000 UTC m=+0.018108613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc9aa5045a0d7aff627082fde5ca954a842bcb59ebb4874a45e66ca834baa50-merged.mount: Deactivated successfully.
Jan 31 06:25:48 compute-0 podman[250107]: 2026-01-31 06:25:48.875543048 +0000 UTC m=+0.139980566 container remove 614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_meninsky, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:25:48 compute-0 systemd[1]: libpod-conmon-614307fbc26e58b21cebe1c4eb3c5cb280d40104f83bade9544b5d6d60e71716.scope: Deactivated successfully.
Jan 31 06:25:48 compute-0 podman[250148]: 2026-01-31 06:25:48.992253085 +0000 UTC m=+0.043215412 container create a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_greider, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:25:49 compute-0 systemd[1]: Started libpod-conmon-a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0.scope.
Jan 31 06:25:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb90c9b46b9aa20c76c226a3f895ffdf3fd2a3ffe032b8897eb6b4265bcb708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb90c9b46b9aa20c76c226a3f895ffdf3fd2a3ffe032b8897eb6b4265bcb708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb90c9b46b9aa20c76c226a3f895ffdf3fd2a3ffe032b8897eb6b4265bcb708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfb90c9b46b9aa20c76c226a3f895ffdf3fd2a3ffe032b8897eb6b4265bcb708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:49 compute-0 podman[250148]: 2026-01-31 06:25:48.969754489 +0000 UTC m=+0.020716836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:25:49 compute-0 podman[250148]: 2026-01-31 06:25:49.071339289 +0000 UTC m=+0.122301646 container init a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_greider, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 06:25:49 compute-0 podman[250148]: 2026-01-31 06:25:49.075579738 +0000 UTC m=+0.126542065 container start a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 06:25:49 compute-0 podman[250148]: 2026-01-31 06:25:49.079398256 +0000 UTC m=+0.130360693 container attach a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 06:25:49 compute-0 bold_greider[250164]: {
Jan 31 06:25:49 compute-0 bold_greider[250164]:     "0": [
Jan 31 06:25:49 compute-0 bold_greider[250164]:         {
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "devices": [
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "/dev/loop3"
Jan 31 06:25:49 compute-0 bold_greider[250164]:             ],
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_name": "ceph_lv0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_size": "21470642176",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "name": "ceph_lv0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "tags": {
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cluster_name": "ceph",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.crush_device_class": "",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.encrypted": "0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.objectstore": "bluestore",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osd_id": "0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.type": "block",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.vdo": "0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.with_tpm": "0"
Jan 31 06:25:49 compute-0 bold_greider[250164]:             },
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "type": "block",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "vg_name": "ceph_vg0"
Jan 31 06:25:49 compute-0 bold_greider[250164]:         }
Jan 31 06:25:49 compute-0 bold_greider[250164]:     ],
Jan 31 06:25:49 compute-0 bold_greider[250164]:     "1": [
Jan 31 06:25:49 compute-0 bold_greider[250164]:         {
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "devices": [
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "/dev/loop4"
Jan 31 06:25:49 compute-0 bold_greider[250164]:             ],
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_name": "ceph_lv1",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_size": "21470642176",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "name": "ceph_lv1",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "tags": {
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cluster_name": "ceph",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.crush_device_class": "",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.encrypted": "0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.objectstore": "bluestore",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osd_id": "1",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.type": "block",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.vdo": "0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.with_tpm": "0"
Jan 31 06:25:49 compute-0 bold_greider[250164]:             },
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "type": "block",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "vg_name": "ceph_vg1"
Jan 31 06:25:49 compute-0 bold_greider[250164]:         }
Jan 31 06:25:49 compute-0 bold_greider[250164]:     ],
Jan 31 06:25:49 compute-0 bold_greider[250164]:     "2": [
Jan 31 06:25:49 compute-0 bold_greider[250164]:         {
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "devices": [
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "/dev/loop5"
Jan 31 06:25:49 compute-0 bold_greider[250164]:             ],
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_name": "ceph_lv2",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_size": "21470642176",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "name": "ceph_lv2",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "tags": {
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.cluster_name": "ceph",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.crush_device_class": "",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.encrypted": "0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.objectstore": "bluestore",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osd_id": "2",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.type": "block",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.vdo": "0",
Jan 31 06:25:49 compute-0 bold_greider[250164]:                 "ceph.with_tpm": "0"
Jan 31 06:25:49 compute-0 bold_greider[250164]:             },
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "type": "block",
Jan 31 06:25:49 compute-0 bold_greider[250164]:             "vg_name": "ceph_vg2"
Jan 31 06:25:49 compute-0 bold_greider[250164]:         }
Jan 31 06:25:49 compute-0 bold_greider[250164]:     ]
Jan 31 06:25:49 compute-0 bold_greider[250164]: }
Jan 31 06:25:49 compute-0 systemd[1]: libpod-a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0.scope: Deactivated successfully.
Jan 31 06:25:49 compute-0 podman[250148]: 2026-01-31 06:25:49.336561721 +0000 UTC m=+0.387524048 container died a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_greider, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfb90c9b46b9aa20c76c226a3f895ffdf3fd2a3ffe032b8897eb6b4265bcb708-merged.mount: Deactivated successfully.
Jan 31 06:25:49 compute-0 podman[250148]: 2026-01-31 06:25:49.371697273 +0000 UTC m=+0.422659600 container remove a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_greider, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 06:25:49 compute-0 systemd[1]: libpod-conmon-a1fbcb146fd531e481d45dead2e7f11b111833f8959a79e1c34fa4b5dc5343c0.scope: Deactivated successfully.
Jan 31 06:25:49 compute-0 sudo[250070]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:49 compute-0 sudo[250186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:25:49 compute-0 sudo[250186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:49 compute-0 sudo[250186]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:49 compute-0 sudo[250211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:25:49 compute-0 sudo[250211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:49 compute-0 podman[250248]: 2026-01-31 06:25:49.737187837 +0000 UTC m=+0.030010159 container create e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 06:25:49 compute-0 systemd[1]: Started libpod-conmon-e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a.scope.
Jan 31 06:25:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:25:49 compute-0 podman[250248]: 2026-01-31 06:25:49.803589752 +0000 UTC m=+0.096412084 container init e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:25:49 compute-0 podman[250248]: 2026-01-31 06:25:49.810787146 +0000 UTC m=+0.103609458 container start e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:25:49 compute-0 eager_wiles[250264]: 167 167
Jan 31 06:25:49 compute-0 systemd[1]: libpod-e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a.scope: Deactivated successfully.
Jan 31 06:25:49 compute-0 podman[250248]: 2026-01-31 06:25:49.817879866 +0000 UTC m=+0.110702218 container attach e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 06:25:49 compute-0 podman[250248]: 2026-01-31 06:25:49.818341549 +0000 UTC m=+0.111163881 container died e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:25:49 compute-0 podman[250248]: 2026-01-31 06:25:49.724403946 +0000 UTC m=+0.017226288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4518d4427f4bae0d4b745ef45e402e6f2143c2fd173d6ef25f9576ba8d67275e-merged.mount: Deactivated successfully.
Jan 31 06:25:49 compute-0 podman[250248]: 2026-01-31 06:25:49.86119202 +0000 UTC m=+0.154014362 container remove e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 31 06:25:49 compute-0 systemd[1]: libpod-conmon-e0e7fabbe5cc5b4d003d908482c14414db0485eb1b49bbe6fc044453632f908a.scope: Deactivated successfully.
Jan 31 06:25:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:49 compute-0 podman[250288]: 2026-01-31 06:25:49.977164476 +0000 UTC m=+0.038232311 container create cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:25:50 compute-0 systemd[1]: Started libpod-conmon-cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed.scope.
Jan 31 06:25:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36548dd0b89dfdaea1ef0df40bb5475dab9f52b2f8a7ed26bdc840b601c5921/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36548dd0b89dfdaea1ef0df40bb5475dab9f52b2f8a7ed26bdc840b601c5921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36548dd0b89dfdaea1ef0df40bb5475dab9f52b2f8a7ed26bdc840b601c5921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36548dd0b89dfdaea1ef0df40bb5475dab9f52b2f8a7ed26bdc840b601c5921/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:25:50 compute-0 podman[250288]: 2026-01-31 06:25:50.046141734 +0000 UTC m=+0.107209599 container init cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 06:25:50 compute-0 podman[250288]: 2026-01-31 06:25:50.051837865 +0000 UTC m=+0.112905700 container start cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 06:25:50 compute-0 podman[250288]: 2026-01-31 06:25:50.054762178 +0000 UTC m=+0.115830033 container attach cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lamarr, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:25:50 compute-0 podman[250288]: 2026-01-31 06:25:49.95996588 +0000 UTC m=+0.021033745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:25:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:25:50.221 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:25:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:25:50.223 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:25:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:25:50.224 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:25:50 compute-0 lvm[250382]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:25:50 compute-0 lvm[250385]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:25:50 compute-0 lvm[250385]: VG ceph_vg1 finished
Jan 31 06:25:50 compute-0 lvm[250382]: VG ceph_vg0 finished
Jan 31 06:25:50 compute-0 lvm[250387]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:25:50 compute-0 lvm[250387]: VG ceph_vg2 finished
Jan 31 06:25:50 compute-0 peaceful_lamarr[250306]: {}
Jan 31 06:25:50 compute-0 systemd[1]: libpod-cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed.scope: Deactivated successfully.
Jan 31 06:25:50 compute-0 systemd[1]: libpod-cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed.scope: Consumed 1.047s CPU time.
Jan 31 06:25:50 compute-0 podman[250288]: 2026-01-31 06:25:50.830592184 +0000 UTC m=+0.891660049 container died cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c36548dd0b89dfdaea1ef0df40bb5475dab9f52b2f8a7ed26bdc840b601c5921-merged.mount: Deactivated successfully.
Jan 31 06:25:50 compute-0 podman[250288]: 2026-01-31 06:25:50.867222148 +0000 UTC m=+0.928289983 container remove cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lamarr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 06:25:50 compute-0 systemd[1]: libpod-conmon-cc3192572646ced06f0b3f5bf0e3787d96da571267559ce1b9ac1f1cea9cefed.scope: Deactivated successfully.
Jan 31 06:25:50 compute-0 sudo[250211]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:25:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:50 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:25:50 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:50 compute-0 sudo[250403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:25:50 compute-0 sudo[250403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:25:50 compute-0 sudo[250403]: pam_unix(sudo:session): session closed for user root
Jan 31 06:25:51 compute-0 ceph-mon[75251]: pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:51 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:25:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:53 compute-0 ceph-mon[75251]: pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:25:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3257578668' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:25:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:25:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3257578668' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:25:55 compute-0 ceph-mon[75251]: pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/3257578668' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:25:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/3257578668' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:25:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:25:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:25:57 compute-0 ceph-mon[75251]: pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:25:59 compute-0 podman[250429]: 2026-01-31 06:25:59.130975084 +0000 UTC m=+0.050099447 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:25:59 compute-0 ceph-mon[75251]: pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:25:59 compute-0 podman[250428]: 2026-01-31 06:25:59.157994857 +0000 UTC m=+0.079400884 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:25:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:01 compute-0 ceph-mon[75251]: pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:03 compute-0 ceph-mon[75251]: pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 06:26:05 compute-0 ceph-mon[75251]: pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 06:26:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 06:26:06 compute-0 ceph-mon[75251]: pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 31 06:26:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 53 op/s
Jan 31 06:26:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:09 compute-0 ceph-mon[75251]: pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 53 op/s
Jan 31 06:26:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 06:26:11 compute-0 ceph-mon[75251]: pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 06:26:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 06:26:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:13 compute-0 ceph-mon[75251]: pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 06:26:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 06:26:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:26:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:26:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:26:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:26:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:26:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:26:15 compute-0 ceph-mon[75251]: pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 06:26:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 06:26:16 compute-0 ceph-mon[75251]: pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 06:26:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 06:26:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:19 compute-0 ceph-mon[75251]: pgmap v1100: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 06:26:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 31 06:26:21 compute-0 ceph-mon[75251]: pgmap v1101: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 31 06:26:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:23 compute-0 ceph-mon[75251]: pgmap v1102: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:25 compute-0 ceph-mon[75251]: pgmap v1103: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:27 compute-0 ceph-mon[75251]: pgmap v1104: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:29 compute-0 ceph-mon[75251]: pgmap v1105: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:30 compute-0 podman[250471]: 2026-01-31 06:26:30.167766754 +0000 UTC m=+0.076770889 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:26:30 compute-0 podman[250472]: 2026-01-31 06:26:30.168246088 +0000 UTC m=+0.077650255 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.329 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.329 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.329 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.329 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:26:31 compute-0 ceph-mon[75251]: pgmap v1106: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.668 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.669 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.670 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.670 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.670 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.670 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.670 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.670 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.671 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.829 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.829 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.830 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.830 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:26:31 compute-0 nova_compute[239679]: 2026-01-31 06:26:31.830 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:26:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:26:32 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2597751051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:26:32 compute-0 nova_compute[239679]: 2026-01-31 06:26:32.325 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:26:32 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2597751051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:26:32 compute-0 nova_compute[239679]: 2026-01-31 06:26:32.435 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:26:32 compute-0 nova_compute[239679]: 2026-01-31 06:26:32.436 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:26:32 compute-0 nova_compute[239679]: 2026-01-31 06:26:32.437 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:26:32 compute-0 nova_compute[239679]: 2026-01-31 06:26:32.437 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:26:32 compute-0 nova_compute[239679]: 2026-01-31 06:26:32.837 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:26:32 compute-0 nova_compute[239679]: 2026-01-31 06:26:32.838 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:26:32 compute-0 nova_compute[239679]: 2026-01-31 06:26:32.854 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:26:32 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:26:33 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1806144412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:26:33 compute-0 nova_compute[239679]: 2026-01-31 06:26:33.330 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:26:33 compute-0 nova_compute[239679]: 2026-01-31 06:26:33.334 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:26:33 compute-0 ceph-mon[75251]: pgmap v1107: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:33 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1806144412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:26:33 compute-0 nova_compute[239679]: 2026-01-31 06:26:33.546 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:26:33 compute-0 nova_compute[239679]: 2026-01-31 06:26:33.547 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:26:33 compute-0 nova_compute[239679]: 2026-01-31 06:26:33.548 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:26:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:34 compute-0 ceph-mon[75251]: pgmap v1108: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:37 compute-0 ceph-mon[75251]: pgmap v1109: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:39 compute-0 ceph-mon[75251]: pgmap v1110: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:41 compute-0 ceph-mon[75251]: pgmap v1111: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:42 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:43 compute-0 ceph-mon[75251]: pgmap v1112: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:44 compute-0 ceph-mon[75251]: pgmap v1113: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:26:44
Jan 31 06:26:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:26:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:26:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', 'vms', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 31 06:26:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:26:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:47 compute-0 ceph-mon[75251]: pgmap v1114: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:47 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:48 compute-0 ceph-mon[75251]: pgmap v1115: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:26:50.222 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:26:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:26:50.223 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:26:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:26:50.223 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:26:51 compute-0 sudo[250557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:26:51 compute-0 sudo[250557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:51 compute-0 sudo[250557]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:51 compute-0 sudo[250582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:26:51 compute-0 sudo[250582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:51 compute-0 ceph-mon[75251]: pgmap v1116: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:51 compute-0 sudo[250582]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 06:26:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 06:26:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:26:51 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:26:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:26:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:26:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:26:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:26:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:26:51 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:26:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:26:51 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:26:51 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:26:51 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:26:51 compute-0 sudo[250638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:26:51 compute-0 sudo[250638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:51 compute-0 sudo[250638]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:51 compute-0 sudo[250663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:26:51 compute-0 sudo[250663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:51 compute-0 podman[250699]: 2026-01-31 06:26:51.946302217 +0000 UTC m=+0.058756900 container create 8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:26:51 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:51 compute-0 systemd[1]: Started libpod-conmon-8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c.scope.
Jan 31 06:26:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:26:52 compute-0 podman[250699]: 2026-01-31 06:26:51.918687597 +0000 UTC m=+0.031142300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:26:52 compute-0 podman[250699]: 2026-01-31 06:26:52.030615109 +0000 UTC m=+0.143069822 container init 8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 31 06:26:52 compute-0 podman[250699]: 2026-01-31 06:26:52.038968485 +0000 UTC m=+0.151423168 container start 8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:26:52 compute-0 podman[250699]: 2026-01-31 06:26:52.04480281 +0000 UTC m=+0.157257483 container attach 8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Jan 31 06:26:52 compute-0 systemd[1]: libpod-8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c.scope: Deactivated successfully.
Jan 31 06:26:52 compute-0 tender_dubinsky[250716]: 167 167
Jan 31 06:26:52 compute-0 conmon[250716]: conmon 8ac9b7317c19b829c34c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c.scope/container/memory.events
Jan 31 06:26:52 compute-0 podman[250699]: 2026-01-31 06:26:52.048035901 +0000 UTC m=+0.160490614 container died 8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-08c2a08a373656deb95a6573463f821b1ba81289ce66f036a735f2cef99e9caa-merged.mount: Deactivated successfully.
Jan 31 06:26:52 compute-0 podman[250699]: 2026-01-31 06:26:52.120513629 +0000 UTC m=+0.232968312 container remove 8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_dubinsky, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:26:52 compute-0 systemd[1]: libpod-conmon-8ac9b7317c19b829c34cf72a140a4a618b5ecd0af2b74115d654d45911a1095c.scope: Deactivated successfully.
Jan 31 06:26:52 compute-0 podman[250740]: 2026-01-31 06:26:52.261256594 +0000 UTC m=+0.048190682 container create 4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 06:26:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 06:26:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:26:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:26:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:26:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:26:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:26:52 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:26:52 compute-0 systemd[1]: Started libpod-conmon-4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e.scope.
Jan 31 06:26:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab24f2404817b8870030b91a55b66bcf998d7f666a415cb21bef1a1b5971ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab24f2404817b8870030b91a55b66bcf998d7f666a415cb21bef1a1b5971ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab24f2404817b8870030b91a55b66bcf998d7f666a415cb21bef1a1b5971ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab24f2404817b8870030b91a55b66bcf998d7f666a415cb21bef1a1b5971ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dab24f2404817b8870030b91a55b66bcf998d7f666a415cb21bef1a1b5971ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:52 compute-0 podman[250740]: 2026-01-31 06:26:52.239013386 +0000 UTC m=+0.025947504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:26:52 compute-0 podman[250740]: 2026-01-31 06:26:52.349549518 +0000 UTC m=+0.136483616 container init 4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cohen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:26:52 compute-0 podman[250740]: 2026-01-31 06:26:52.354030635 +0000 UTC m=+0.140964713 container start 4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 06:26:52 compute-0 podman[250740]: 2026-01-31 06:26:52.357194714 +0000 UTC m=+0.144128802 container attach 4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:26:52 compute-0 exciting_cohen[250756]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:26:52 compute-0 exciting_cohen[250756]: --> All data devices are unavailable
Jan 31 06:26:52 compute-0 systemd[1]: libpod-4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e.scope: Deactivated successfully.
Jan 31 06:26:52 compute-0 podman[250776]: 2026-01-31 06:26:52.740289936 +0000 UTC m=+0.022433124 container died 4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dab24f2404817b8870030b91a55b66bcf998d7f666a415cb21bef1a1b5971ed-merged.mount: Deactivated successfully.
Jan 31 06:26:52 compute-0 podman[250776]: 2026-01-31 06:26:52.796446183 +0000 UTC m=+0.078589351 container remove 4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:26:52 compute-0 systemd[1]: libpod-conmon-4b1f5f11f34101c86102cce190b5de4ffcbc392ac62ded2eb3ed139b729c8e3e.scope: Deactivated successfully.
Jan 31 06:26:52 compute-0 sudo[250663]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:52 compute-0 sudo[250791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:26:52 compute-0 sudo[250791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:52 compute-0 sudo[250791]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:52 compute-0 sudo[250816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:26:52 compute-0 sudo[250816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:53 compute-0 podman[250853]: 2026-01-31 06:26:53.219131593 +0000 UTC m=+0.030622656 container create a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:26:53 compute-0 systemd[1]: Started libpod-conmon-a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76.scope.
Jan 31 06:26:53 compute-0 ceph-mon[75251]: pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:26:53 compute-0 podman[250853]: 2026-01-31 06:26:53.287752391 +0000 UTC m=+0.099243474 container init a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 06:26:53 compute-0 podman[250853]: 2026-01-31 06:26:53.29336023 +0000 UTC m=+0.104851293 container start a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:26:53 compute-0 systemd[1]: libpod-a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76.scope: Deactivated successfully.
Jan 31 06:26:53 compute-0 silly_mclaren[250871]: 167 167
Jan 31 06:26:53 compute-0 conmon[250871]: conmon a4f8bd7aa6b82982b2c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76.scope/container/memory.events
Jan 31 06:26:53 compute-0 podman[250853]: 2026-01-31 06:26:53.296786086 +0000 UTC m=+0.108277169 container attach a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 06:26:53 compute-0 podman[250853]: 2026-01-31 06:26:53.297081405 +0000 UTC m=+0.108572468 container died a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:26:53 compute-0 podman[250853]: 2026-01-31 06:26:53.206198127 +0000 UTC m=+0.017689220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5cc11385f36eeac3a7bfbff578543263c84a3c51cf72aad37ab310e9323ac34-merged.mount: Deactivated successfully.
Jan 31 06:26:53 compute-0 podman[250853]: 2026-01-31 06:26:53.328165373 +0000 UTC m=+0.139656436 container remove a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mclaren, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:26:53 compute-0 systemd[1]: libpod-conmon-a4f8bd7aa6b82982b2c0c19b4ecb269c3ff64277e9db0978d636cae692e9cc76.scope: Deactivated successfully.
Jan 31 06:26:53 compute-0 podman[250894]: 2026-01-31 06:26:53.445790606 +0000 UTC m=+0.034971149 container create 71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 06:26:53 compute-0 systemd[1]: Started libpod-conmon-71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5.scope.
Jan 31 06:26:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d9f89436245150601e3db29f688ad7731afd813cde3807d8dbabb45a90752f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d9f89436245150601e3db29f688ad7731afd813cde3807d8dbabb45a90752f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d9f89436245150601e3db29f688ad7731afd813cde3807d8dbabb45a90752f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d9f89436245150601e3db29f688ad7731afd813cde3807d8dbabb45a90752f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:53 compute-0 podman[250894]: 2026-01-31 06:26:53.516183104 +0000 UTC m=+0.105363667 container init 71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_murdock, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 06:26:53 compute-0 podman[250894]: 2026-01-31 06:26:53.520938428 +0000 UTC m=+0.110118971 container start 71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:26:53 compute-0 podman[250894]: 2026-01-31 06:26:53.524655463 +0000 UTC m=+0.113836006 container attach 71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 06:26:53 compute-0 podman[250894]: 2026-01-31 06:26:53.429778103 +0000 UTC m=+0.018958646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:26:53 compute-0 cool_murdock[250911]: {
Jan 31 06:26:53 compute-0 cool_murdock[250911]:     "0": [
Jan 31 06:26:53 compute-0 cool_murdock[250911]:         {
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "devices": [
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "/dev/loop3"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             ],
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_name": "ceph_lv0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_size": "21470642176",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "name": "ceph_lv0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "tags": {
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cluster_name": "ceph",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.crush_device_class": "",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.encrypted": "0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.objectstore": "bluestore",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osd_id": "0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.type": "block",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.vdo": "0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.with_tpm": "0"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             },
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "type": "block",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "vg_name": "ceph_vg0"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:         }
Jan 31 06:26:53 compute-0 cool_murdock[250911]:     ],
Jan 31 06:26:53 compute-0 cool_murdock[250911]:     "1": [
Jan 31 06:26:53 compute-0 cool_murdock[250911]:         {
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "devices": [
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "/dev/loop4"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             ],
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_name": "ceph_lv1",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_size": "21470642176",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "name": "ceph_lv1",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "tags": {
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cluster_name": "ceph",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.crush_device_class": "",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.encrypted": "0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.objectstore": "bluestore",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osd_id": "1",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.type": "block",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.vdo": "0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.with_tpm": "0"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             },
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "type": "block",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "vg_name": "ceph_vg1"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:         }
Jan 31 06:26:53 compute-0 cool_murdock[250911]:     ],
Jan 31 06:26:53 compute-0 cool_murdock[250911]:     "2": [
Jan 31 06:26:53 compute-0 cool_murdock[250911]:         {
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "devices": [
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "/dev/loop5"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             ],
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_name": "ceph_lv2",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_size": "21470642176",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "name": "ceph_lv2",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "tags": {
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.cluster_name": "ceph",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.crush_device_class": "",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.encrypted": "0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.objectstore": "bluestore",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osd_id": "2",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.type": "block",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.vdo": "0",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:                 "ceph.with_tpm": "0"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             },
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "type": "block",
Jan 31 06:26:53 compute-0 cool_murdock[250911]:             "vg_name": "ceph_vg2"
Jan 31 06:26:53 compute-0 cool_murdock[250911]:         }
Jan 31 06:26:53 compute-0 cool_murdock[250911]:     ]
Jan 31 06:26:53 compute-0 cool_murdock[250911]: }
Jan 31 06:26:53 compute-0 systemd[1]: libpod-71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5.scope: Deactivated successfully.
Jan 31 06:26:53 compute-0 podman[250894]: 2026-01-31 06:26:53.789813954 +0000 UTC m=+0.378994497 container died 71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_murdock, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 06:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-15d9f89436245150601e3db29f688ad7731afd813cde3807d8dbabb45a90752f-merged.mount: Deactivated successfully.
Jan 31 06:26:53 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:26:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2744456388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:26:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:26:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2744456388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:26:54 compute-0 podman[250894]: 2026-01-31 06:26:54.405934437 +0000 UTC m=+0.995115000 container remove 71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 06:26:54 compute-0 systemd[1]: libpod-conmon-71249a6cc26ac574000232050a1e0a1e52a51e525349f2cabec6f8e97e2778f5.scope: Deactivated successfully.
Jan 31 06:26:54 compute-0 sudo[250816]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2744456388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:26:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2744456388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:26:54 compute-0 sudo[250933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:26:54 compute-0 sudo[250933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:54 compute-0 sudo[250933]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:54 compute-0 sudo[250958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:26:54 compute-0 sudo[250958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:54 compute-0 podman[250996]: 2026-01-31 06:26:54.819586302 +0000 UTC m=+0.021687523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:26:54 compute-0 podman[250996]: 2026-01-31 06:26:54.942975078 +0000 UTC m=+0.145076279 container create c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldwasser, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:26:55 compute-0 systemd[1]: Started libpod-conmon-c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d.scope.
Jan 31 06:26:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:26:55 compute-0 podman[250996]: 2026-01-31 06:26:55.312070563 +0000 UTC m=+0.514171784 container init c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldwasser, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 06:26:55 compute-0 podman[250996]: 2026-01-31 06:26:55.317488276 +0000 UTC m=+0.519589477 container start c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldwasser, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:26:55 compute-0 fervent_goldwasser[251013]: 167 167
Jan 31 06:26:55 compute-0 systemd[1]: libpod-c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d.scope: Deactivated successfully.
Jan 31 06:26:55 compute-0 podman[250996]: 2026-01-31 06:26:55.426798835 +0000 UTC m=+0.628900036 container attach c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldwasser, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 06:26:55 compute-0 podman[250996]: 2026-01-31 06:26:55.427578747 +0000 UTC m=+0.629679948 container died c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 06:26:55 compute-0 ceph-mon[75251]: pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e159065f069c730a58ac2302c52876ef4c119ee2a5c46f8afb3e7e89b3788579-merged.mount: Deactivated successfully.
Jan 31 06:26:55 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.854998079401647e-06 of space, bias 4.0, pg target 0.0022259976952819765 quantized to 16 (current 16)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:26:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:26:56 compute-0 podman[250996]: 2026-01-31 06:26:56.073356838 +0000 UTC m=+1.275458039 container remove c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_goldwasser, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 31 06:26:56 compute-0 systemd[1]: libpod-conmon-c8a597818e7b7e537a4fc14ef6846d27a7d13144dbe4b2243fdc2bff5690c44d.scope: Deactivated successfully.
Jan 31 06:26:56 compute-0 podman[251037]: 2026-01-31 06:26:56.193495972 +0000 UTC m=+0.022303101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:26:56 compute-0 podman[251037]: 2026-01-31 06:26:56.295139544 +0000 UTC m=+0.123946643 container create 0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:26:56 compute-0 systemd[1]: Started libpod-conmon-0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d.scope.
Jan 31 06:26:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5d22ba7df1a95fb3d20ef264a44eb29f8dde24b75d25bcecb46efe6dd2643c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5d22ba7df1a95fb3d20ef264a44eb29f8dde24b75d25bcecb46efe6dd2643c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5d22ba7df1a95fb3d20ef264a44eb29f8dde24b75d25bcecb46efe6dd2643c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af5d22ba7df1a95fb3d20ef264a44eb29f8dde24b75d25bcecb46efe6dd2643c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:26:56 compute-0 podman[251037]: 2026-01-31 06:26:56.998801121 +0000 UTC m=+0.827608270 container init 0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:26:57 compute-0 podman[251037]: 2026-01-31 06:26:57.003479093 +0000 UTC m=+0.832286202 container start 0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 06:26:57 compute-0 ceph-mon[75251]: pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:57 compute-0 podman[251037]: 2026-01-31 06:26:57.467023717 +0000 UTC m=+1.295830836 container attach 0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 06:26:57 compute-0 lvm[251132]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:26:57 compute-0 lvm[251132]: VG ceph_vg0 finished
Jan 31 06:26:57 compute-0 lvm[251135]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:26:57 compute-0 lvm[251135]: VG ceph_vg1 finished
Jan 31 06:26:57 compute-0 lvm[251137]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:26:57 compute-0 lvm[251137]: VG ceph_vg2 finished
Jan 31 06:26:57 compute-0 lvm[251138]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:26:57 compute-0 lvm[251138]: VG ceph_vg1 finished
Jan 31 06:26:57 compute-0 lvm[251139]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:26:57 compute-0 lvm[251139]: VG ceph_vg1 finished
Jan 31 06:26:57 compute-0 jovial_fermat[251054]: {}
Jan 31 06:26:57 compute-0 systemd[1]: libpod-0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d.scope: Deactivated successfully.
Jan 31 06:26:57 compute-0 podman[251037]: 2026-01-31 06:26:57.779505794 +0000 UTC m=+1.608312893 container died 0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:26:57 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:26:57 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-af5d22ba7df1a95fb3d20ef264a44eb29f8dde24b75d25bcecb46efe6dd2643c-merged.mount: Deactivated successfully.
Jan 31 06:26:58 compute-0 podman[251037]: 2026-01-31 06:26:58.648754098 +0000 UTC m=+2.477561197 container remove 0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:26:58 compute-0 systemd[1]: libpod-conmon-0c4421f57088aed0e4575cd1daff3f962d0d067d9ed325ba530b42ba1c76d31d.scope: Deactivated successfully.
Jan 31 06:26:58 compute-0 sudo[250958]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:58 compute-0 nova_compute[239679]: 2026-01-31 06:26:58.722 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:58 compute-0 nova_compute[239679]: 2026-01-31 06:26:58.724 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:26:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:26:58 compute-0 ceph-mon[75251]: pgmap v1120: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:26:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:26:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:26:59 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:26:59 compute-0 sudo[251157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:26:59 compute-0 sudo[251157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:26:59 compute-0 sudo[251157]: pam_unix(sudo:session): session closed for user root
Jan 31 06:26:59 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:27:00 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.450 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.450 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.450 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.930 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.930 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.930 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.931 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.931 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.931 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.931 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.931 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:27:00 compute-0 nova_compute[239679]: 2026-01-31 06:27:00.932 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:01 compute-0 podman[251183]: 2026-01-31 06:27:01.117871476 +0000 UTC m=+0.044247281 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 06:27:01 compute-0 podman[251182]: 2026-01-31 06:27:01.138875309 +0000 UTC m=+0.065201673 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.262 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.263 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.264 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.264 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.264 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:27:01 compute-0 ceph-mon[75251]: pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:27:01 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667318232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.767 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.876 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.877 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.877 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:27:01 compute-0 nova_compute[239679]: 2026-01-31 06:27:01.877 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:27:01 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:02 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/667318232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:27:02 compute-0 nova_compute[239679]: 2026-01-31 06:27:02.886 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:27:02 compute-0 nova_compute[239679]: 2026-01-31 06:27:02.887 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:27:02 compute-0 nova_compute[239679]: 2026-01-31 06:27:02.904 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:27:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:27:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:27:03 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493218918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:27:03 compute-0 nova_compute[239679]: 2026-01-31 06:27:03.487 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:27:03 compute-0 nova_compute[239679]: 2026-01-31 06:27:03.491 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:27:03 compute-0 ceph-mon[75251]: pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:03 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3493218918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:27:03 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:04 compute-0 nova_compute[239679]: 2026-01-31 06:27:04.512 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:27:04 compute-0 nova_compute[239679]: 2026-01-31 06:27:04.514 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:27:04 compute-0 nova_compute[239679]: 2026-01-31 06:27:04.514 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:27:04 compute-0 ceph-mon[75251]: pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:05 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:07 compute-0 ceph-mon[75251]: pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:27:07 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:08 compute-0 ceph-mon[75251]: pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:09 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:11 compute-0 ceph-mon[75251]: pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:11 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:27:13 compute-0 ceph-mon[75251]: pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:13 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:14 compute-0 ceph-mon[75251]: pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:27:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:27:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:27:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:27:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:27:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:27:15 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:17 compute-0 ceph-mon[75251]: pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:27:17 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:19 compute-0 ceph-mon[75251]: pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:19 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:21 compute-0 ceph-mon[75251]: pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:21 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:27:23 compute-0 ceph-mon[75251]: pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:23 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:25 compute-0 ceph-mon[75251]: pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:25 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.215043) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840846215091, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3578573, "memory_usage": 3637216, "flush_reason": "Manual Compaction"}
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840846233198, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3489382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21098, "largest_seqno": 23150, "table_properties": {"data_size": 3480026, "index_size": 5914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18498, "raw_average_key_size": 19, "raw_value_size": 3461504, "raw_average_value_size": 3730, "num_data_blocks": 268, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769840618, "oldest_key_time": 1769840618, "file_creation_time": 1769840846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 18185 microseconds, and 4869 cpu microseconds.
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.233242) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3489382 bytes OK
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.233261) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.235813) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.235830) EVENT_LOG_v1 {"time_micros": 1769840846235824, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.235850) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3569975, prev total WAL file size 3569975, number of live WAL files 2.
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.236651) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3407KB)], [50(7726KB)]
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840846236740, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11401759, "oldest_snapshot_seqno": -1}
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4818 keys, 9623823 bytes, temperature: kUnknown
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840846298870, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9623823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9588796, "index_size": 21861, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12101, "raw_key_size": 117950, "raw_average_key_size": 24, "raw_value_size": 9498976, "raw_average_value_size": 1971, "num_data_blocks": 919, "num_entries": 4818, "num_filter_entries": 4818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769840846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.299089) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9623823 bytes
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.300520) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.3 rd, 154.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5332, records dropped: 514 output_compression: NoCompression
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.300534) EVENT_LOG_v1 {"time_micros": 1769840846300527, "job": 26, "event": "compaction_finished", "compaction_time_micros": 62199, "compaction_time_cpu_micros": 15701, "output_level": 6, "num_output_files": 1, "total_output_size": 9623823, "num_input_records": 5332, "num_output_records": 4818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840846300923, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840846301458, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.236579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.301480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.301484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.301485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.301486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:27:26 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:27:26.301488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:27:27 compute-0 ceph-mon[75251]: pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:27:27 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:29 compute-0 ceph-mon[75251]: pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:29 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:31 compute-0 ceph-mon[75251]: pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:31 compute-0 podman[251272]: 2026-01-31 06:27:31.309955874 +0000 UTC m=+0.048960650 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:27:31 compute-0 podman[251271]: 2026-01-31 06:27:31.327016635 +0000 UTC m=+0.068944214 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:27:31 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:32 compute-0 ceph-mon[75251]: pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 06:27:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 31 06:27:33 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:34 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 06:27:34 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 31 06:27:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 31 06:27:35 compute-0 ceph-mon[75251]: pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:35 compute-0 ceph-mon[75251]: osdmap e134: 3 total, 3 up, 3 in
Jan 31 06:27:35 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 06:27:35 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 31 06:27:35 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 31 06:27:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 06:27:36 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 31 06:27:36 compute-0 ceph-mon[75251]: osdmap e135: 3 total, 3 up, 3 in
Jan 31 06:27:37 compute-0 ceph-mon[75251]: pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:37 compute-0 ceph-mon[75251]: osdmap e136: 3 total, 3 up, 3 in
Jan 31 06:27:37 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 13 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 21 op/s
Jan 31 06:27:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:27:39 compute-0 ceph-mon[75251]: pgmap v1143: 305 pgs: 305 active+clean; 13 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.1 MiB/s wr, 21 op/s
Jan 31 06:27:39 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 31 06:27:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 31 06:27:40 compute-0 ceph-mon[75251]: pgmap v1144: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 31 06:27:40 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 06:27:40 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 31 06:27:41 compute-0 ceph-mon[75251]: osdmap e137: 3 total, 3 up, 3 in
Jan 31 06:27:41 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 4.1 MiB/s wr, 53 op/s
Jan 31 06:27:42 compute-0 ceph-mon[75251]: pgmap v1146: 305 pgs: 305 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 4.1 MiB/s wr, 53 op/s
Jan 31 06:27:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:27:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 31 06:27:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 06:27:43 compute-0 ceph-mon[75251]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 31 06:27:43 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 06:27:44 compute-0 ceph-mon[75251]: osdmap e138: 3 total, 3 up, 3 in
Jan 31 06:27:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:27:44
Jan 31 06:27:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:27:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:27:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.log', 'images', 'volumes', '.rgw.root', 'default.rgw.meta']
Jan 31 06:27:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:27:45 compute-0 ceph-mon[75251]: pgmap v1148: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:27:45 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.6 MiB/s wr, 31 op/s
Jan 31 06:27:47 compute-0 ceph-mon[75251]: pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.6 MiB/s wr, 31 op/s
Jan 31 06:27:47 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Jan 31 06:27:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:27:49 compute-0 ceph-mon[75251]: pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Jan 31 06:27:49 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 MiB/s wr, 20 op/s
Jan 31 06:27:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:27:50.224 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:27:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:27:50.224 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:27:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:27:50.224 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:27:51 compute-0 ceph-mon[75251]: pgmap v1151: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 MiB/s wr, 20 op/s
Jan 31 06:27:52 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 1.2 MiB/s wr, 1 op/s
Jan 31 06:27:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:27:53 compute-0 nova_compute[239679]: 2026-01-31 06:27:53.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:53 compute-0 nova_compute[239679]: 2026-01-31 06:27:53.508 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:53 compute-0 nova_compute[239679]: 2026-01-31 06:27:53.508 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:53 compute-0 nova_compute[239679]: 2026-01-31 06:27:53.508 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:27:53 compute-0 nova_compute[239679]: 2026-01-31 06:27:53.508 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:53 compute-0 ceph-mon[75251]: pgmap v1152: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 1.2 MiB/s wr, 1 op/s
Jan 31 06:27:54 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 654 B/s rd, 1.1 MiB/s wr, 1 op/s
Jan 31 06:27:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:27:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/65669722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:27:54 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:27:54 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/65669722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:27:55 compute-0 ceph-mon[75251]: pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 654 B/s rd, 1.1 MiB/s wr, 1 op/s
Jan 31 06:27:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/65669722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:27:55 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/65669722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:56 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658956393945913 of space, bias 1.0, pg target 0.1997686918183774 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8582738702586133e-06 of space, bias 4.0, pg target 0.002229928644310336 quantized to 16 (current 16)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:27:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.223 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.224 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.224 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.224 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.224 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:27:56 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:27:56 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220788799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.703 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.820 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.821 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.821 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:27:56 compute-0 nova_compute[239679]: 2026-01-31 06:27:56.821 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:27:57 compute-0 ceph-mon[75251]: pgmap v1154: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:57 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2220788799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:27:58 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:27:58 compute-0 nova_compute[239679]: 2026-01-31 06:27:58.772 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:27:58 compute-0 nova_compute[239679]: 2026-01-31 06:27:58.773 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:27:58 compute-0 nova_compute[239679]: 2026-01-31 06:27:58.789 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:27:58 compute-0 ceph-mon[75251]: pgmap v1155: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:27:59 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:27:59 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256883324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:27:59 compute-0 nova_compute[239679]: 2026-01-31 06:27:59.267 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:27:59 compute-0 nova_compute[239679]: 2026-01-31 06:27:59.272 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:27:59 compute-0 sudo[251359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:27:59 compute-0 sudo[251359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:27:59 compute-0 sudo[251359]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:59 compute-0 sudo[251386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 06:27:59 compute-0 sudo[251386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:27:59 compute-0 nova_compute[239679]: 2026-01-31 06:27:59.762 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:27:59 compute-0 nova_compute[239679]: 2026-01-31 06:27:59.765 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:27:59 compute-0 nova_compute[239679]: 2026-01-31 06:27:59.765 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:27:59 compute-0 nova_compute[239679]: 2026-01-31 06:27:59.766 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:27:59 compute-0 nova_compute[239679]: 2026-01-31 06:27:59.766 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 06:28:00 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:00 compute-0 podman[251456]: 2026-01-31 06:28:00.047538585 +0000 UTC m=+0.414570483 container exec 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:28:00 compute-0 nova_compute[239679]: 2026-01-31 06:28:00.111 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 06:28:00 compute-0 nova_compute[239679]: 2026-01-31 06:28:00.111 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:00 compute-0 nova_compute[239679]: 2026-01-31 06:28:00.111 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 06:28:00 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/256883324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:28:00 compute-0 podman[251477]: 2026-01-31 06:28:00.246315017 +0000 UTC m=+0.110409893 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:28:00 compute-0 podman[251456]: 2026-01-31 06:28:00.34084767 +0000 UTC m=+0.707879538 container exec_died 57c5f39a87653a9b431cceb5df1861cedd54d59e7063f6580ed7f53c576e101d (image=quay.io/ceph/ceph:v20, name=ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 06:28:01 compute-0 sudo[251386]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:28:01 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:28:01 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:01 compute-0 sudo[251644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:28:01 compute-0 ceph-mon[75251]: pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:01 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:01 compute-0 sudo[251644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:01 compute-0 sudo[251644]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:01 compute-0 nova_compute[239679]: 2026-01-31 06:28:01.417 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:01 compute-0 nova_compute[239679]: 2026-01-31 06:28:01.417 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:01 compute-0 nova_compute[239679]: 2026-01-31 06:28:01.418 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:28:01 compute-0 nova_compute[239679]: 2026-01-31 06:28:01.418 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:28:01 compute-0 sudo[251681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:28:01 compute-0 podman[251669]: 2026-01-31 06:28:01.445858339 +0000 UTC m=+0.059526409 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 31 06:28:01 compute-0 sudo[251681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:01 compute-0 podman[251668]: 2026-01-31 06:28:01.495399175 +0000 UTC m=+0.108984912 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 06:28:01 compute-0 nova_compute[239679]: 2026-01-31 06:28:01.542 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:28:01 compute-0 nova_compute[239679]: 2026-01-31 06:28:01.543 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:01 compute-0 nova_compute[239679]: 2026-01-31 06:28:01.543 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:01 compute-0 nova_compute[239679]: 2026-01-31 06:28:01.543 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:01 compute-0 anacron[120286]: Job `cron.daily' started
Jan 31 06:28:01 compute-0 anacron[120286]: Job `cron.daily' terminated
Jan 31 06:28:01 compute-0 sudo[251681]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:28:01 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:28:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:28:01 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:28:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:28:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:02 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:28:02 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:28:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:28:02 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:28:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:28:02 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:28:02 compute-0 sudo[251770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:28:02 compute-0 sudo[251770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:02 compute-0 sudo[251770]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:02 compute-0 sudo[251795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:28:02 compute-0 sudo[251795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:02 compute-0 podman[251833]: 2026-01-31 06:28:02.427361936 +0000 UTC m=+0.111935556 container create c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:28:02 compute-0 podman[251833]: 2026-01-31 06:28:02.332617786 +0000 UTC m=+0.017191436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:28:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:28:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:28:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:28:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:28:02 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:28:02 compute-0 systemd[1]: Started libpod-conmon-c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391.scope.
Jan 31 06:28:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:28:02 compute-0 podman[251833]: 2026-01-31 06:28:02.840046175 +0000 UTC m=+0.524619815 container init c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 06:28:02 compute-0 podman[251833]: 2026-01-31 06:28:02.845530029 +0000 UTC m=+0.530103649 container start c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 06:28:02 compute-0 zealous_cerf[251850]: 167 167
Jan 31 06:28:02 compute-0 systemd[1]: libpod-c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391.scope: Deactivated successfully.
Jan 31 06:28:02 compute-0 podman[251833]: 2026-01-31 06:28:02.917405215 +0000 UTC m=+0.601978925 container attach c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:28:02 compute-0 podman[251833]: 2026-01-31 06:28:02.918449704 +0000 UTC m=+0.603023394 container died c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:28:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:03 compute-0 nova_compute[239679]: 2026-01-31 06:28:03.508 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:03 compute-0 ceph-mon[75251]: pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-65103672de1be1dd07c8c74f2b06cce37746752760c51e70bc913bf3375d77bd-merged.mount: Deactivated successfully.
Jan 31 06:28:04 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:04 compute-0 podman[251833]: 2026-01-31 06:28:04.179989184 +0000 UTC m=+1.864562804 container remove c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 06:28:04 compute-0 systemd[1]: libpod-conmon-c2262b41bfc8d87573e904e739d2422e9cf7dbf7200c41defe1294398e0c3391.scope: Deactivated successfully.
Jan 31 06:28:04 compute-0 podman[251875]: 2026-01-31 06:28:04.310964934 +0000 UTC m=+0.044188456 container create a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_johnson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:28:04 compute-0 podman[251875]: 2026-01-31 06:28:04.28916514 +0000 UTC m=+0.022388692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:28:04 compute-0 systemd[1]: Started libpod-conmon-a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c.scope.
Jan 31 06:28:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc72fc790639c861a081f8556af642e4609c20e748a5aa8cdf92c16e0b628b63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc72fc790639c861a081f8556af642e4609c20e748a5aa8cdf92c16e0b628b63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc72fc790639c861a081f8556af642e4609c20e748a5aa8cdf92c16e0b628b63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc72fc790639c861a081f8556af642e4609c20e748a5aa8cdf92c16e0b628b63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc72fc790639c861a081f8556af642e4609c20e748a5aa8cdf92c16e0b628b63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:04 compute-0 podman[251875]: 2026-01-31 06:28:04.540001769 +0000 UTC m=+0.273225301 container init a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 06:28:04 compute-0 podman[251875]: 2026-01-31 06:28:04.544923477 +0000 UTC m=+0.278146999 container start a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_johnson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:28:04 compute-0 podman[251875]: 2026-01-31 06:28:04.740499558 +0000 UTC m=+0.473723100 container attach a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 06:28:04 compute-0 busy_johnson[251892]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:28:04 compute-0 busy_johnson[251892]: --> All data devices are unavailable
Jan 31 06:28:04 compute-0 systemd[1]: libpod-a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c.scope: Deactivated successfully.
Jan 31 06:28:04 compute-0 podman[251875]: 2026-01-31 06:28:04.909671115 +0000 UTC m=+0.642894637 container died a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 06:28:05 compute-0 ceph-mon[75251]: pgmap v1158: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc72fc790639c861a081f8556af642e4609c20e748a5aa8cdf92c16e0b628b63-merged.mount: Deactivated successfully.
Jan 31 06:28:05 compute-0 sshd-session[251924]: Invalid user solana from 45.148.10.240 port 52154
Jan 31 06:28:05 compute-0 sshd-session[251924]: Connection closed by invalid user solana 45.148.10.240 port 52154 [preauth]
Jan 31 06:28:06 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:06 compute-0 podman[251875]: 2026-01-31 06:28:06.084040737 +0000 UTC m=+1.817264289 container remove a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_johnson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 06:28:06 compute-0 systemd[1]: libpod-conmon-a3f05850f7b1f92cf905d537af1145b1e218b1a677d1b11072305a684ca0262c.scope: Deactivated successfully.
Jan 31 06:28:06 compute-0 sudo[251795]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:06 compute-0 sudo[251926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:28:06 compute-0 sudo[251926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:06 compute-0 sudo[251926]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:06 compute-0 sudo[251951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:28:06 compute-0 sudo[251951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:06 compute-0 podman[251989]: 2026-01-31 06:28:06.492443685 +0000 UTC m=+0.044922547 container create cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:28:06 compute-0 systemd[1]: Started libpod-conmon-cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4.scope.
Jan 31 06:28:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:28:06 compute-0 podman[251989]: 2026-01-31 06:28:06.544721889 +0000 UTC m=+0.097200781 container init cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:28:06 compute-0 podman[251989]: 2026-01-31 06:28:06.549772741 +0000 UTC m=+0.102251613 container start cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:28:06 compute-0 strange_turing[252005]: 167 167
Jan 31 06:28:06 compute-0 systemd[1]: libpod-cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4.scope: Deactivated successfully.
Jan 31 06:28:06 compute-0 podman[251989]: 2026-01-31 06:28:06.556069908 +0000 UTC m=+0.108548830 container attach cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 06:28:06 compute-0 podman[251989]: 2026-01-31 06:28:06.557008195 +0000 UTC m=+0.109487077 container died cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_turing, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 06:28:06 compute-0 podman[251989]: 2026-01-31 06:28:06.471977389 +0000 UTC m=+0.024456391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-41028ed4be395162b3de9b36550bb9fd53cacbd351ae9a8351c5c7462dbce5f8-merged.mount: Deactivated successfully.
Jan 31 06:28:06 compute-0 podman[251989]: 2026-01-31 06:28:06.60119548 +0000 UTC m=+0.153674382 container remove cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 06:28:06 compute-0 systemd[1]: libpod-conmon-cc503ad8ad233c5b86a0e895729be2e4665aa5d1361225af47bd8058bd101aa4.scope: Deactivated successfully.
Jan 31 06:28:06 compute-0 podman[252030]: 2026-01-31 06:28:06.727949372 +0000 UTC m=+0.039966658 container create 9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:28:06 compute-0 systemd[1]: Started libpod-conmon-9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8.scope.
Jan 31 06:28:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e4ce4d4388ce03441b7a262279a8d310484e157fad4fd1c7e713098b0d639/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e4ce4d4388ce03441b7a262279a8d310484e157fad4fd1c7e713098b0d639/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e4ce4d4388ce03441b7a262279a8d310484e157fad4fd1c7e713098b0d639/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e4ce4d4388ce03441b7a262279a8d310484e157fad4fd1c7e713098b0d639/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:06 compute-0 podman[252030]: 2026-01-31 06:28:06.789437384 +0000 UTC m=+0.101454720 container init 9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 06:28:06 compute-0 podman[252030]: 2026-01-31 06:28:06.796843873 +0000 UTC m=+0.108861199 container start 9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_faraday, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 06:28:06 compute-0 podman[252030]: 2026-01-31 06:28:06.80203997 +0000 UTC m=+0.114057276 container attach 9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_faraday, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:28:06 compute-0 podman[252030]: 2026-01-31 06:28:06.710925432 +0000 UTC m=+0.022942748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:28:07 compute-0 crazy_faraday[252047]: {
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:     "0": [
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:         {
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "devices": [
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "/dev/loop3"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             ],
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_name": "ceph_lv0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_size": "21470642176",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "name": "ceph_lv0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "tags": {
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cluster_name": "ceph",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.crush_device_class": "",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.encrypted": "0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.objectstore": "bluestore",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osd_id": "0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.type": "block",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.vdo": "0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.with_tpm": "0"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             },
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "type": "block",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "vg_name": "ceph_vg0"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:         }
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:     ],
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:     "1": [
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:         {
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "devices": [
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "/dev/loop4"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             ],
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_name": "ceph_lv1",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_size": "21470642176",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "name": "ceph_lv1",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "tags": {
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cluster_name": "ceph",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.crush_device_class": "",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.encrypted": "0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.objectstore": "bluestore",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osd_id": "1",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.type": "block",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.vdo": "0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.with_tpm": "0"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             },
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "type": "block",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "vg_name": "ceph_vg1"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:         }
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:     ],
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:     "2": [
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:         {
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "devices": [
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "/dev/loop5"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             ],
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_name": "ceph_lv2",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_size": "21470642176",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "name": "ceph_lv2",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "tags": {
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.cluster_name": "ceph",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.crush_device_class": "",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.encrypted": "0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.objectstore": "bluestore",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osd_id": "2",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.type": "block",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.vdo": "0",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:                 "ceph.with_tpm": "0"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             },
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "type": "block",
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:             "vg_name": "ceph_vg2"
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:         }
Jan 31 06:28:07 compute-0 crazy_faraday[252047]:     ]
Jan 31 06:28:07 compute-0 crazy_faraday[252047]: }
Jan 31 06:28:07 compute-0 systemd[1]: libpod-9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8.scope: Deactivated successfully.
Jan 31 06:28:07 compute-0 podman[252030]: 2026-01-31 06:28:07.082584365 +0000 UTC m=+0.394601691 container died 9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:28:07 compute-0 ceph-mon[75251]: pgmap v1159: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c4e4ce4d4388ce03441b7a262279a8d310484e157fad4fd1c7e713098b0d639-merged.mount: Deactivated successfully.
Jan 31 06:28:07 compute-0 podman[252030]: 2026-01-31 06:28:07.125545496 +0000 UTC m=+0.437562792 container remove 9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 06:28:07 compute-0 systemd[1]: libpod-conmon-9a8b5f3ac62e7fbf98ec7144dffdf360f990baa9a1d2270b8252e66c2fd43ce8.scope: Deactivated successfully.
Jan 31 06:28:07 compute-0 sudo[251951]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:07 compute-0 sudo[252069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:28:07 compute-0 sudo[252069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:07 compute-0 sudo[252069]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:07 compute-0 sudo[252094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:28:07 compute-0 sudo[252094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:07 compute-0 podman[252131]: 2026-01-31 06:28:07.571229524 +0000 UTC m=+0.052697156 container create 0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:28:07 compute-0 systemd[1]: Started libpod-conmon-0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759.scope.
Jan 31 06:28:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:28:07 compute-0 podman[252131]: 2026-01-31 06:28:07.631420461 +0000 UTC m=+0.112888173 container init 0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_spence, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 06:28:07 compute-0 podman[252131]: 2026-01-31 06:28:07.639460457 +0000 UTC m=+0.120928119 container start 0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 06:28:07 compute-0 podman[252131]: 2026-01-31 06:28:07.548460883 +0000 UTC m=+0.029928605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:28:07 compute-0 podman[252131]: 2026-01-31 06:28:07.643674196 +0000 UTC m=+0.125141868 container attach 0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_spence, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:28:07 compute-0 angry_spence[252147]: 167 167
Jan 31 06:28:07 compute-0 systemd[1]: libpod-0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759.scope: Deactivated successfully.
Jan 31 06:28:07 compute-0 podman[252131]: 2026-01-31 06:28:07.645186008 +0000 UTC m=+0.126653700 container died 0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:28:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e402112ddd3355ae49293725566254cc48b68afd7dbf903ad41a5102694f3f05-merged.mount: Deactivated successfully.
Jan 31 06:28:07 compute-0 podman[252131]: 2026-01-31 06:28:07.682899041 +0000 UTC m=+0.164366703 container remove 0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 06:28:07 compute-0 systemd[1]: libpod-conmon-0b20bca2872d1c157030fe4918c0b53856757080dae699792277276a854a1759.scope: Deactivated successfully.
Jan 31 06:28:07 compute-0 podman[252171]: 2026-01-31 06:28:07.842834878 +0000 UTC m=+0.061822653 container create d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:28:07 compute-0 systemd[1]: Started libpod-conmon-d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb.scope.
Jan 31 06:28:07 compute-0 podman[252171]: 2026-01-31 06:28:07.801067141 +0000 UTC m=+0.020054926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:28:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565e7f5ee657bd648317ad1576f86b9918cf3cd02143a699fccf373a2b0dba03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565e7f5ee657bd648317ad1576f86b9918cf3cd02143a699fccf373a2b0dba03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565e7f5ee657bd648317ad1576f86b9918cf3cd02143a699fccf373a2b0dba03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565e7f5ee657bd648317ad1576f86b9918cf3cd02143a699fccf373a2b0dba03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:28:07 compute-0 podman[252171]: 2026-01-31 06:28:07.922661597 +0000 UTC m=+0.141649372 container init d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 06:28:07 compute-0 podman[252171]: 2026-01-31 06:28:07.927464063 +0000 UTC m=+0.146451818 container start d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:28:07 compute-0 podman[252171]: 2026-01-31 06:28:07.932631298 +0000 UTC m=+0.151619053 container attach d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 06:28:08 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:08 compute-0 lvm[252268]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:28:08 compute-0 lvm[252268]: VG ceph_vg1 finished
Jan 31 06:28:08 compute-0 lvm[252265]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:28:08 compute-0 lvm[252265]: VG ceph_vg0 finished
Jan 31 06:28:08 compute-0 lvm[252269]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:28:08 compute-0 lvm[252269]: VG ceph_vg2 finished
Jan 31 06:28:08 compute-0 thirsty_hypatia[252188]: {}
Jan 31 06:28:08 compute-0 systemd[1]: libpod-d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb.scope: Deactivated successfully.
Jan 31 06:28:08 compute-0 systemd[1]: libpod-d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb.scope: Consumed 1.075s CPU time.
Jan 31 06:28:08 compute-0 podman[252171]: 2026-01-31 06:28:08.67081929 +0000 UTC m=+0.889807045 container died d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-565e7f5ee657bd648317ad1576f86b9918cf3cd02143a699fccf373a2b0dba03-merged.mount: Deactivated successfully.
Jan 31 06:28:08 compute-0 podman[252171]: 2026-01-31 06:28:08.711867886 +0000 UTC m=+0.930855631 container remove d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 06:28:08 compute-0 systemd[1]: libpod-conmon-d2be664619f029e848ca1667fe63dcee4a67d6fdd149f5d53ca3733e4e94a2fb.scope: Deactivated successfully.
Jan 31 06:28:08 compute-0 sudo[252094]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:28:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:28:08 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:08 compute-0 sudo[252286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:28:08 compute-0 sudo[252286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:28:08 compute-0 sudo[252286]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:09 compute-0 ceph-mon[75251]: pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:09 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:28:10 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:11 compute-0 ceph-mon[75251]: pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:12 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:13 compute-0 ceph-mon[75251]: pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:14 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:15 compute-0 ceph-mon[75251]: pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:28:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:28:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:28:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:28:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:28:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:28:16 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:17 compute-0 ceph-mon[75251]: pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:18 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:19 compute-0 ceph-mon[75251]: pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:20 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:21 compute-0 ceph-mon[75251]: pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:22 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:22 compute-0 ceph-mon[75251]: pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:24 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:25 compute-0 ceph-mon[75251]: pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:26 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:27 compute-0 ceph-mon[75251]: pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:28 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:28 compute-0 ceph-mon[75251]: pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:30 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:31 compute-0 ceph-mon[75251]: pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:32 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:32 compute-0 podman[252312]: 2026-01-31 06:28:32.125056123 +0000 UTC m=+0.046667786 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 06:28:32 compute-0 podman[252311]: 2026-01-31 06:28:32.203304298 +0000 UTC m=+0.124930601 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 06:28:32 compute-0 ceph-mon[75251]: pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:34 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:35 compute-0 ceph-mon[75251]: pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:36 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:37 compute-0 ceph-mon[75251]: pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:38 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:39 compute-0 ceph-mon[75251]: pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:40 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:41 compute-0 ceph-mon[75251]: pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:42 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:42 compute-0 ceph-mon[75251]: pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:44 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:28:44
Jan 31 06:28:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:28:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:28:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'volumes', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log']
Jan 31 06:28:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:28:45 compute-0 ceph-mon[75251]: pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:28:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:28:46 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:47 compute-0 ceph-mon[75251]: pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:48 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:49 compute-0 ceph-mon[75251]: pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:50 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:28:50.225 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:28:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:28:50.226 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:28:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:28:50.226 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:28:50 compute-0 ceph-mon[75251]: pgmap v1181: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:51 compute-0 nova_compute[239679]: 2026-01-31 06:28:51.718 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:51 compute-0 nova_compute[239679]: 2026-01-31 06:28:51.719 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:28:52 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:53 compute-0 ceph-mon[75251]: pgmap v1182: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:53 compute-0 nova_compute[239679]: 2026-01-31 06:28:53.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:53 compute-0 nova_compute[239679]: 2026-01-31 06:28:53.507 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:28:54 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:55 compute-0 ceph-mon[75251]: pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658956393945913 of space, bias 1.0, pg target 0.1997686918183774 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8582738702586133e-06 of space, bias 4.0, pg target 0.002229928644310336 quantized to 16 (current 16)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:28:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:28:56 compute-0 ceph-mon[75251]: pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:58 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:28:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:28:59 compute-0 ceph-mon[75251]: pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:00 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:01 compute-0 ceph-mon[75251]: pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:02 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:02 compute-0 ceph-mon[75251]: pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:03 compute-0 podman[252360]: 2026-01-31 06:29:03.130282083 +0000 UTC m=+0.051435870 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 06:29:03 compute-0 podman[252359]: 2026-01-31 06:29:03.184664135 +0000 UTC m=+0.109827925 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 06:29:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:04 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:05 compute-0 ceph-mon[75251]: pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:06 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:07 compute-0 ceph-mon[75251]: pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:08 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:08 compute-0 sudo[252405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:29:08 compute-0 sudo[252405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:08 compute-0 sudo[252405]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:08 compute-0 sudo[252430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:29:08 compute-0 sudo[252430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:09 compute-0 ceph-mon[75251]: pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:09 compute-0 sudo[252430]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:29:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:29:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:29:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:29:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:29:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:29:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:29:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:29:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:29:09 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:29:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:29:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:29:09 compute-0 sudo[252487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:29:09 compute-0 sudo[252487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:09 compute-0 sudo[252487]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:09 compute-0 sudo[252512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:29:09 compute-0 sudo[252512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:09 compute-0 podman[252546]: 2026-01-31 06:29:09.643474458 +0000 UTC m=+0.036075058 container create 840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 06:29:09 compute-0 systemd[1]: Started libpod-conmon-840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d.scope.
Jan 31 06:29:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:29:09 compute-0 podman[252546]: 2026-01-31 06:29:09.707962375 +0000 UTC m=+0.100563005 container init 840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 06:29:09 compute-0 podman[252546]: 2026-01-31 06:29:09.712844022 +0000 UTC m=+0.105444622 container start 840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ride, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 06:29:09 compute-0 infallible_ride[252562]: 167 167
Jan 31 06:29:09 compute-0 systemd[1]: libpod-840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d.scope: Deactivated successfully.
Jan 31 06:29:09 compute-0 podman[252546]: 2026-01-31 06:29:09.718564214 +0000 UTC m=+0.111164814 container attach 840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 06:29:09 compute-0 podman[252546]: 2026-01-31 06:29:09.719181961 +0000 UTC m=+0.111782621 container died 840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:29:09 compute-0 podman[252546]: 2026-01-31 06:29:09.626272313 +0000 UTC m=+0.018872943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-028ae704ac95e3c9f144694c4bd66f0987753b2f41d9bfc76319559b34b672e4-merged.mount: Deactivated successfully.
Jan 31 06:29:09 compute-0 podman[252546]: 2026-01-31 06:29:09.760904577 +0000 UTC m=+0.153505197 container remove 840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 06:29:09 compute-0 systemd[1]: libpod-conmon-840724b6149347b2d3be59e06d775a82d62fbdd9f35c591f4a63cfd54043f80d.scope: Deactivated successfully.
Jan 31 06:29:09 compute-0 podman[252587]: 2026-01-31 06:29:09.916933173 +0000 UTC m=+0.048457237 container create 29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:29:09 compute-0 systemd[1]: Started libpod-conmon-29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7.scope.
Jan 31 06:29:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20560fce2c044581824fc8b4de924c4f8b03a6c044e04662ab20eb2f532356e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20560fce2c044581824fc8b4de924c4f8b03a6c044e04662ab20eb2f532356e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20560fce2c044581824fc8b4de924c4f8b03a6c044e04662ab20eb2f532356e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20560fce2c044581824fc8b4de924c4f8b03a6c044e04662ab20eb2f532356e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20560fce2c044581824fc8b4de924c4f8b03a6c044e04662ab20eb2f532356e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:09 compute-0 podman[252587]: 2026-01-31 06:29:09.891820885 +0000 UTC m=+0.023344959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:29:10 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:10 compute-0 podman[252587]: 2026-01-31 06:29:10.039241929 +0000 UTC m=+0.170765993 container init 29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 06:29:10 compute-0 podman[252587]: 2026-01-31 06:29:10.044426205 +0000 UTC m=+0.175950259 container start 29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:29:10 compute-0 podman[252587]: 2026-01-31 06:29:10.179490681 +0000 UTC m=+0.311014745 container attach 29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 06:29:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:29:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:29:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:29:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:29:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:29:10 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:29:10 compute-0 competent_pare[252603]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:29:10 compute-0 competent_pare[252603]: --> All data devices are unavailable
Jan 31 06:29:10 compute-0 systemd[1]: libpod-29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7.scope: Deactivated successfully.
Jan 31 06:29:10 compute-0 podman[252587]: 2026-01-31 06:29:10.474235167 +0000 UTC m=+0.605759221 container died 29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 06:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-20560fce2c044581824fc8b4de924c4f8b03a6c044e04662ab20eb2f532356e4-merged.mount: Deactivated successfully.
Jan 31 06:29:10 compute-0 podman[252587]: 2026-01-31 06:29:10.655285819 +0000 UTC m=+0.786809873 container remove 29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:29:10 compute-0 systemd[1]: libpod-conmon-29e8717a356a1fa58989b08b211b6b1ab0925323e3cf622b9c25677073d3ccd7.scope: Deactivated successfully.
Jan 31 06:29:10 compute-0 sudo[252512]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:10 compute-0 sudo[252634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:29:10 compute-0 sudo[252634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:10 compute-0 sudo[252634]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:10 compute-0 sudo[252659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:29:10 compute-0 sudo[252659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:10 compute-0 podman[252697]: 2026-01-31 06:29:10.992899703 +0000 UTC m=+0.032152147 container create 20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 06:29:11 compute-0 systemd[1]: Started libpod-conmon-20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028.scope.
Jan 31 06:29:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:29:11 compute-0 podman[252697]: 2026-01-31 06:29:11.057594156 +0000 UTC m=+0.096846660 container init 20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nash, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 06:29:11 compute-0 podman[252697]: 2026-01-31 06:29:11.064675755 +0000 UTC m=+0.103928199 container start 20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nash, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 06:29:11 compute-0 podman[252697]: 2026-01-31 06:29:11.067996159 +0000 UTC m=+0.107248613 container attach 20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:29:11 compute-0 elated_nash[252713]: 167 167
Jan 31 06:29:11 compute-0 systemd[1]: libpod-20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028.scope: Deactivated successfully.
Jan 31 06:29:11 compute-0 podman[252697]: 2026-01-31 06:29:11.071094176 +0000 UTC m=+0.110346660 container died 20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:29:11 compute-0 podman[252697]: 2026-01-31 06:29:10.977672933 +0000 UTC m=+0.016925387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c57d1734eb95ad9496179c2ef933aa80fb3f597524eca2634493834ec9a70a06-merged.mount: Deactivated successfully.
Jan 31 06:29:11 compute-0 podman[252697]: 2026-01-31 06:29:11.122093153 +0000 UTC m=+0.161345637 container remove 20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_nash, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:29:11 compute-0 systemd[1]: libpod-conmon-20030aa97a195903306738a7dfa120049e8adcfe316c0d5af3d92fc503a9d028.scope: Deactivated successfully.
Jan 31 06:29:11 compute-0 podman[252737]: 2026-01-31 06:29:11.244339098 +0000 UTC m=+0.043762564 container create a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_albattani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 06:29:11 compute-0 ceph-mon[75251]: pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:11 compute-0 systemd[1]: Started libpod-conmon-a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd.scope.
Jan 31 06:29:11 compute-0 podman[252737]: 2026-01-31 06:29:11.222518423 +0000 UTC m=+0.021941879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:29:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db3d92364d77435ff16056faa0ee5925cb4eb5ec1f13ae9cb5c3a01f6fea9f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db3d92364d77435ff16056faa0ee5925cb4eb5ec1f13ae9cb5c3a01f6fea9f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db3d92364d77435ff16056faa0ee5925cb4eb5ec1f13ae9cb5c3a01f6fea9f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db3d92364d77435ff16056faa0ee5925cb4eb5ec1f13ae9cb5c3a01f6fea9f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:11 compute-0 podman[252737]: 2026-01-31 06:29:11.360191212 +0000 UTC m=+0.159614688 container init a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:29:11 compute-0 podman[252737]: 2026-01-31 06:29:11.366991544 +0000 UTC m=+0.166414990 container start a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_albattani, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:29:11 compute-0 podman[252737]: 2026-01-31 06:29:11.386485953 +0000 UTC m=+0.185909469 container attach a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_albattani, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 06:29:11 compute-0 romantic_albattani[252753]: {
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:     "0": [
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:         {
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "devices": [
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "/dev/loop3"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             ],
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_name": "ceph_lv0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_size": "21470642176",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "name": "ceph_lv0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "tags": {
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cluster_name": "ceph",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.crush_device_class": "",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.encrypted": "0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.objectstore": "bluestore",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osd_id": "0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.type": "block",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.vdo": "0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.with_tpm": "0"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             },
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "type": "block",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "vg_name": "ceph_vg0"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:         }
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:     ],
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:     "1": [
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:         {
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "devices": [
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "/dev/loop4"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             ],
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_name": "ceph_lv1",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_size": "21470642176",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "name": "ceph_lv1",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "tags": {
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cluster_name": "ceph",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.crush_device_class": "",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.encrypted": "0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.objectstore": "bluestore",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osd_id": "1",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.type": "block",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.vdo": "0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.with_tpm": "0"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             },
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "type": "block",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "vg_name": "ceph_vg1"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:         }
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:     ],
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:     "2": [
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:         {
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "devices": [
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "/dev/loop5"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             ],
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_name": "ceph_lv2",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_size": "21470642176",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "name": "ceph_lv2",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "tags": {
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.cluster_name": "ceph",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.crush_device_class": "",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.encrypted": "0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.objectstore": "bluestore",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osd_id": "2",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.type": "block",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.vdo": "0",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:                 "ceph.with_tpm": "0"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             },
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "type": "block",
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:             "vg_name": "ceph_vg2"
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:         }
Jan 31 06:29:11 compute-0 romantic_albattani[252753]:     ]
Jan 31 06:29:11 compute-0 romantic_albattani[252753]: }
Jan 31 06:29:11 compute-0 systemd[1]: libpod-a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd.scope: Deactivated successfully.
Jan 31 06:29:11 compute-0 podman[252737]: 2026-01-31 06:29:11.709893937 +0000 UTC m=+0.509317363 container died a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_albattani, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6db3d92364d77435ff16056faa0ee5925cb4eb5ec1f13ae9cb5c3a01f6fea9f2-merged.mount: Deactivated successfully.
Jan 31 06:29:11 compute-0 podman[252737]: 2026-01-31 06:29:11.747958589 +0000 UTC m=+0.547382005 container remove a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 06:29:11 compute-0 systemd[1]: libpod-conmon-a83e7ad94c4c07d1c51fdf7411be3e03185848c0ed0d2ec72dbece59d4deabfd.scope: Deactivated successfully.
Jan 31 06:29:11 compute-0 sudo[252659]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:11 compute-0 sudo[252773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:29:11 compute-0 sudo[252773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:11 compute-0 sudo[252773]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:11 compute-0 sudo[252798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:29:11 compute-0 sudo[252798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:12 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:12 compute-0 podman[252836]: 2026-01-31 06:29:12.125664823 +0000 UTC m=+0.044229798 container create 6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 06:29:12 compute-0 systemd[1]: Started libpod-conmon-6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33.scope.
Jan 31 06:29:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:29:12 compute-0 podman[252836]: 2026-01-31 06:29:12.18908587 +0000 UTC m=+0.107650875 container init 6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 31 06:29:12 compute-0 podman[252836]: 2026-01-31 06:29:12.193980538 +0000 UTC m=+0.112545513 container start 6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hypatia, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 06:29:12 compute-0 objective_hypatia[252852]: 167 167
Jan 31 06:29:12 compute-0 podman[252836]: 2026-01-31 06:29:12.197096486 +0000 UTC m=+0.115661481 container attach 6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hypatia, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:29:12 compute-0 systemd[1]: libpod-6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33.scope: Deactivated successfully.
Jan 31 06:29:12 compute-0 podman[252836]: 2026-01-31 06:29:12.198263769 +0000 UTC m=+0.116828744 container died 6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hypatia, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:29:12 compute-0 podman[252836]: 2026-01-31 06:29:12.111263177 +0000 UTC m=+0.029828172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69ec6d8093d499c2718e36614e3fec4466b2db9b9ef8384de91b069b465a4be-merged.mount: Deactivated successfully.
Jan 31 06:29:12 compute-0 podman[252836]: 2026-01-31 06:29:12.236226728 +0000 UTC m=+0.154791703 container remove 6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hypatia, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:29:12 compute-0 systemd[1]: libpod-conmon-6a0fa604031baca8d0856d7148e27ba785dd67cc79f4127b347329882cf2dd33.scope: Deactivated successfully.
Jan 31 06:29:12 compute-0 podman[252877]: 2026-01-31 06:29:12.343541542 +0000 UTC m=+0.033774792 container create e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_leakey, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:29:12 compute-0 systemd[1]: Started libpod-conmon-e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152.scope.
Jan 31 06:29:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3574322739d3f7f8ec63af05c875e765f0ffd1f29a77c50cf2c6debb3ac0bcd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3574322739d3f7f8ec63af05c875e765f0ffd1f29a77c50cf2c6debb3ac0bcd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3574322739d3f7f8ec63af05c875e765f0ffd1f29a77c50cf2c6debb3ac0bcd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3574322739d3f7f8ec63af05c875e765f0ffd1f29a77c50cf2c6debb3ac0bcd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:29:12 compute-0 podman[252877]: 2026-01-31 06:29:12.413211766 +0000 UTC m=+0.103445026 container init e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_leakey, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:29:12 compute-0 podman[252877]: 2026-01-31 06:29:12.418734221 +0000 UTC m=+0.108967471 container start e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:29:12 compute-0 podman[252877]: 2026-01-31 06:29:12.422083906 +0000 UTC m=+0.112317186 container attach e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:29:12 compute-0 podman[252877]: 2026-01-31 06:29:12.328838138 +0000 UTC m=+0.019071408 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:29:13 compute-0 lvm[252973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:29:13 compute-0 lvm[252974]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:29:13 compute-0 lvm[252973]: VG ceph_vg0 finished
Jan 31 06:29:13 compute-0 lvm[252974]: VG ceph_vg1 finished
Jan 31 06:29:13 compute-0 lvm[252976]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:29:13 compute-0 lvm[252976]: VG ceph_vg2 finished
Jan 31 06:29:13 compute-0 exciting_leakey[252894]: {}
Jan 31 06:29:13 compute-0 systemd[1]: libpod-e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152.scope: Deactivated successfully.
Jan 31 06:29:13 compute-0 podman[252877]: 2026-01-31 06:29:13.176920486 +0000 UTC m=+0.867153736 container died e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_leakey, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 06:29:13 compute-0 systemd[1]: libpod-e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152.scope: Consumed 1.048s CPU time.
Jan 31 06:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3574322739d3f7f8ec63af05c875e765f0ffd1f29a77c50cf2c6debb3ac0bcd0-merged.mount: Deactivated successfully.
Jan 31 06:29:13 compute-0 podman[252877]: 2026-01-31 06:29:13.21750001 +0000 UTC m=+0.907733270 container remove e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:29:13 compute-0 systemd[1]: libpod-conmon-e5beaaefb9e437005c000c08fc11f85b6b87d1d3b49af021adf9135f69210152.scope: Deactivated successfully.
Jan 31 06:29:13 compute-0 sudo[252798]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:29:13 compute-0 ceph-mon[75251]: pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:29:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:29:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:29:13 compute-0 sudo[252991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:29:13 compute-0 sudo[252991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:29:13 compute-0 sudo[252991]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.384568) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840953384634, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1433, "num_deletes": 511, "total_data_size": 1757471, "memory_usage": 1786304, "flush_reason": "Manual Compaction"}
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840953394520, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1517453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23151, "largest_seqno": 24583, "table_properties": {"data_size": 1511478, "index_size": 2731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16222, "raw_average_key_size": 19, "raw_value_size": 1497184, "raw_average_value_size": 1757, "num_data_blocks": 123, "num_entries": 852, "num_filter_entries": 852, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769840847, "oldest_key_time": 1769840847, "file_creation_time": 1769840953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 9985 microseconds, and 3862 cpu microseconds.
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.394562) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1517453 bytes OK
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.394579) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.397146) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.397160) EVENT_LOG_v1 {"time_micros": 1769840953397156, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.397179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1750013, prev total WAL file size 1750013, number of live WAL files 2.
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.397652) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1481KB)], [53(9398KB)]
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840953397708, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11141276, "oldest_snapshot_seqno": -1}
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4646 keys, 7871562 bytes, temperature: kUnknown
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840953449757, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7871562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7840030, "index_size": 18814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 115998, "raw_average_key_size": 24, "raw_value_size": 7755489, "raw_average_value_size": 1669, "num_data_blocks": 783, "num_entries": 4646, "num_filter_entries": 4646, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769840953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.449979) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7871562 bytes
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.451712) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.7 rd, 151.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.2 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(12.5) write-amplify(5.2) OK, records in: 5670, records dropped: 1024 output_compression: NoCompression
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.451729) EVENT_LOG_v1 {"time_micros": 1769840953451721, "job": 28, "event": "compaction_finished", "compaction_time_micros": 52130, "compaction_time_cpu_micros": 20651, "output_level": 6, "num_output_files": 1, "total_output_size": 7871562, "num_input_records": 5670, "num_output_records": 4646, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840953451941, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769840953452880, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.397588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.452976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.452981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.452983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.452984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:29:13 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:29:13.452986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:29:14 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:29:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:29:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:29:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:29:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:29:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:29:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:29:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:29:15 compute-0 ceph-mon[75251]: pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:16 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:17 compute-0 ceph-mon[75251]: pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:18 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:19 compute-0 ceph-mon[75251]: pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:20 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:21 compute-0 ceph-mon[75251]: pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:22 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:22 compute-0 ceph-mon[75251]: pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:24 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:24 compute-0 ceph-mon[75251]: pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:26 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:27 compute-0 ceph-mon[75251]: pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:28 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:28 compute-0 ceph-mon[75251]: pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:30 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:30 compute-0 ceph-mon[75251]: pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:32 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:33 compute-0 ceph-mon[75251]: pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:34 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:34 compute-0 podman[253017]: 2026-01-31 06:29:34.150271419 +0000 UTC m=+0.064849388 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 06:29:34 compute-0 podman[253016]: 2026-01-31 06:29:34.182191049 +0000 UTC m=+0.103510008 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 06:29:35 compute-0 ceph-mon[75251]: pgmap v1203: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:36 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:37 compute-0 ceph-mon[75251]: pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:38 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:39 compute-0 ceph-mon[75251]: pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:40 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:41 compute-0 ceph-mon[75251]: pgmap v1206: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:42 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:42 compute-0 sshd-session[253058]: Connection closed by 45.148.10.240 port 47306
Jan 31 06:29:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:43 compute-0 ceph-mon[75251]: pgmap v1207: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:44 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:29:44
Jan 31 06:29:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:29:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:29:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'vms', 'default.rgw.log', '.rgw.root', '.mgr']
Jan 31 06:29:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:29:45 compute-0 ceph-mon[75251]: pgmap v1208: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:29:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:29:46 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:47 compute-0 ceph-mon[75251]: pgmap v1209: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:48 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:48 compute-0 ceph-mon[75251]: pgmap v1210: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:50 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:29:50.226 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:29:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:29:50.227 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:29:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:29:50.227 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:29:51 compute-0 ceph-mon[75251]: pgmap v1211: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:52 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:53 compute-0 ceph-mon[75251]: pgmap v1212: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:54 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:55 compute-0 ceph-mon[75251]: pgmap v1213: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658956393945913 of space, bias 1.0, pg target 0.1997686918183774 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8582738702586133e-06 of space, bias 4.0, pg target 0.002229928644310336 quantized to 16 (current 16)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:29:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:29:56 compute-0 ceph-mon[75251]: pgmap v1214: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:58 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:29:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:29:59 compute-0 ceph-mon[75251]: pgmap v1215: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:00 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:01 compute-0 ceph-mon[75251]: pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:02 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:03 compute-0 ceph-mon[75251]: pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:04 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:05 compute-0 podman[253060]: 2026-01-31 06:30:05.129048786 +0000 UTC m=+0.052062898 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:30:05 compute-0 podman[253059]: 2026-01-31 06:30:05.148826314 +0000 UTC m=+0.073671427 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 06:30:05 compute-0 ceph-mon[75251]: pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:06 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:07 compute-0 ceph-mon[75251]: pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:08 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:09 compute-0 ceph-mon[75251]: pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:10 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:10 compute-0 nova_compute[239679]: 2026-01-31 06:30:10.359 239684 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 105.28 sec
Jan 31 06:30:10 compute-0 ceph-mon[75251]: pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:11 compute-0 nova_compute[239679]: 2026-01-31 06:30:11.488 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:30:11 compute-0 nova_compute[239679]: 2026-01-31 06:30:11.489 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:30:11 compute-0 nova_compute[239679]: 2026-01-31 06:30:11.489 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:30:11 compute-0 nova_compute[239679]: 2026-01-31 06:30:11.489 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:30:11 compute-0 nova_compute[239679]: 2026-01-31 06:30:11.490 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:30:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:30:11 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3181279550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:30:11 compute-0 nova_compute[239679]: 2026-01-31 06:30:11.995 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:30:12 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3181279550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:30:12 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:12 compute-0 nova_compute[239679]: 2026-01-31 06:30:12.117 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:30:12 compute-0 nova_compute[239679]: 2026-01-31 06:30:12.118 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:30:12 compute-0 nova_compute[239679]: 2026-01-31 06:30:12.119 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:30:12 compute-0 nova_compute[239679]: 2026-01-31 06:30:12.119 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:30:12 compute-0 sshd-session[253124]: Invalid user sol from 45.148.10.240 port 46086
Jan 31 06:30:12 compute-0 sshd-session[253124]: Connection closed by invalid user sol 45.148.10.240 port 46086 [preauth]
Jan 31 06:30:13 compute-0 ceph-mon[75251]: pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:13 compute-0 sudo[253126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:30:13 compute-0 sudo[253126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:13 compute-0 sudo[253126]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:13 compute-0 sudo[253151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:30:13 compute-0 sudo[253151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:13 compute-0 sudo[253151]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:30:13 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:30:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:30:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:30:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:30:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:30:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:30:13 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:30:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:30:13 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:30:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:30:13 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:30:13 compute-0 sudo[253207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:30:13 compute-0 sudo[253207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:13 compute-0 sudo[253207]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:13 compute-0 sudo[253232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:30:13 compute-0 sudo[253232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:30:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:30:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:30:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:30:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:30:14 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:30:14 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:14 compute-0 podman[253269]: 2026-01-31 06:30:14.205371925 +0000 UTC m=+0.034083422 container create 3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:30:14 compute-0 systemd[1]: Started libpod-conmon-3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d.scope.
Jan 31 06:30:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:30:14 compute-0 podman[253269]: 2026-01-31 06:30:14.260096077 +0000 UTC m=+0.088807584 container init 3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bhabha, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:30:14 compute-0 podman[253269]: 2026-01-31 06:30:14.265137629 +0000 UTC m=+0.093849126 container start 3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bhabha, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 06:30:14 compute-0 podman[253269]: 2026-01-31 06:30:14.268674568 +0000 UTC m=+0.097386095 container attach 3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 31 06:30:14 compute-0 systemd[1]: libpod-3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d.scope: Deactivated successfully.
Jan 31 06:30:14 compute-0 priceless_bhabha[253285]: 167 167
Jan 31 06:30:14 compute-0 conmon[253285]: conmon 3411b2265e0c9a483e1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d.scope/container/memory.events
Jan 31 06:30:14 compute-0 podman[253269]: 2026-01-31 06:30:14.270226372 +0000 UTC m=+0.098937869 container died 3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:30:14 compute-0 podman[253269]: 2026-01-31 06:30:14.18957599 +0000 UTC m=+0.018287517 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6062c6952ce4f3201273d91ef9876c52c4a78217125288613b4c3ac1370927b7-merged.mount: Deactivated successfully.
Jan 31 06:30:14 compute-0 podman[253269]: 2026-01-31 06:30:14.30603585 +0000 UTC m=+0.134747347 container remove 3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 06:30:14 compute-0 systemd[1]: libpod-conmon-3411b2265e0c9a483e1b67879b454643df2726b1d2f250ce53db0e06605a050d.scope: Deactivated successfully.
Jan 31 06:30:14 compute-0 podman[253307]: 2026-01-31 06:30:14.415411822 +0000 UTC m=+0.033228887 container create 92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hodgkin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:30:14 compute-0 systemd[1]: Started libpod-conmon-92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b.scope.
Jan 31 06:30:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d9327204ba579cedaf40b9c0feafe4e5491316cd2482a37718dd309d6d559f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d9327204ba579cedaf40b9c0feafe4e5491316cd2482a37718dd309d6d559f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d9327204ba579cedaf40b9c0feafe4e5491316cd2482a37718dd309d6d559f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d9327204ba579cedaf40b9c0feafe4e5491316cd2482a37718dd309d6d559f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d9327204ba579cedaf40b9c0feafe4e5491316cd2482a37718dd309d6d559f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:14 compute-0 podman[253307]: 2026-01-31 06:30:14.484665354 +0000 UTC m=+0.102482459 container init 92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hodgkin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:30:14 compute-0 podman[253307]: 2026-01-31 06:30:14.493962316 +0000 UTC m=+0.111779401 container start 92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 06:30:14 compute-0 podman[253307]: 2026-01-31 06:30:14.40042009 +0000 UTC m=+0.018237185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:30:14 compute-0 podman[253307]: 2026-01-31 06:30:14.510257595 +0000 UTC m=+0.128074660 container attach 92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hodgkin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 06:30:14 compute-0 jolly_hodgkin[253323]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:30:14 compute-0 jolly_hodgkin[253323]: --> All data devices are unavailable
Jan 31 06:30:14 compute-0 systemd[1]: libpod-92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b.scope: Deactivated successfully.
Jan 31 06:30:14 compute-0 podman[253343]: 2026-01-31 06:30:14.933777889 +0000 UTC m=+0.025495179 container died 92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8d9327204ba579cedaf40b9c0feafe4e5491316cd2482a37718dd309d6d559f-merged.mount: Deactivated successfully.
Jan 31 06:30:14 compute-0 podman[253343]: 2026-01-31 06:30:14.973847529 +0000 UTC m=+0.065564799 container remove 92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hodgkin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 06:30:14 compute-0 systemd[1]: libpod-conmon-92a33d6885ec8a3edd93d985ce78de6e0b65e4d3553ec9dcb5d67f1d45026f6b.scope: Deactivated successfully.
Jan 31 06:30:15 compute-0 sudo[253232]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:15 compute-0 sudo[253358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:30:15 compute-0 ceph-mon[75251]: pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:15 compute-0 sudo[253358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:15 compute-0 sudo[253358]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:15 compute-0 sudo[253383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:30:15 compute-0 sudo[253383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:30:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:30:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:30:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:30:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:30:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:30:15 compute-0 podman[253420]: 2026-01-31 06:30:15.377780471 +0000 UTC m=+0.035140721 container create 415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:30:15 compute-0 systemd[1]: Started libpod-conmon-415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6.scope.
Jan 31 06:30:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:30:15 compute-0 podman[253420]: 2026-01-31 06:30:15.435359833 +0000 UTC m=+0.092720103 container init 415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 06:30:15 compute-0 podman[253420]: 2026-01-31 06:30:15.442855934 +0000 UTC m=+0.100216184 container start 415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 06:30:15 compute-0 peaceful_easley[253437]: 167 167
Jan 31 06:30:15 compute-0 systemd[1]: libpod-415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6.scope: Deactivated successfully.
Jan 31 06:30:15 compute-0 podman[253420]: 2026-01-31 06:30:15.449611435 +0000 UTC m=+0.106971695 container attach 415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:30:15 compute-0 podman[253420]: 2026-01-31 06:30:15.450363086 +0000 UTC m=+0.107723336 container died 415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_easley, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:30:15 compute-0 podman[253420]: 2026-01-31 06:30:15.360986988 +0000 UTC m=+0.018347258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-61df5d6b2d849b9bf5e704618560eec18e7279267f4c1a3c1fa0d799c5fe1263-merged.mount: Deactivated successfully.
Jan 31 06:30:15 compute-0 podman[253420]: 2026-01-31 06:30:15.483998134 +0000 UTC m=+0.141358384 container remove 415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:30:15 compute-0 systemd[1]: libpod-conmon-415b6757c770955d28e8421fac31ae06f9eb8438e0a9d2b3989629ee8ec2a9c6.scope: Deactivated successfully.
Jan 31 06:30:15 compute-0 podman[253460]: 2026-01-31 06:30:15.623994069 +0000 UTC m=+0.048923800 container create d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 06:30:15 compute-0 systemd[1]: Started libpod-conmon-d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b.scope.
Jan 31 06:30:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db2f6c1a9d3ca2c5d88b056549ec70d11bd386b18715ef475001093f1fc889a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db2f6c1a9d3ca2c5d88b056549ec70d11bd386b18715ef475001093f1fc889a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db2f6c1a9d3ca2c5d88b056549ec70d11bd386b18715ef475001093f1fc889a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db2f6c1a9d3ca2c5d88b056549ec70d11bd386b18715ef475001093f1fc889a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:15 compute-0 podman[253460]: 2026-01-31 06:30:15.6907378 +0000 UTC m=+0.115667551 container init d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 06:30:15 compute-0 podman[253460]: 2026-01-31 06:30:15.696679037 +0000 UTC m=+0.121608808 container start d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chandrasekhar, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:30:15 compute-0 podman[253460]: 2026-01-31 06:30:15.605600271 +0000 UTC m=+0.030530082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:30:15 compute-0 podman[253460]: 2026-01-31 06:30:15.700983799 +0000 UTC m=+0.125913560 container attach d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chandrasekhar, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]: {
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:     "0": [
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:         {
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "devices": [
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "/dev/loop3"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             ],
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_name": "ceph_lv0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_size": "21470642176",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "name": "ceph_lv0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "tags": {
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cluster_name": "ceph",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.crush_device_class": "",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.encrypted": "0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.objectstore": "bluestore",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osd_id": "0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.type": "block",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.vdo": "0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.with_tpm": "0"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             },
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "type": "block",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "vg_name": "ceph_vg0"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:         }
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:     ],
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:     "1": [
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:         {
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "devices": [
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "/dev/loop4"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             ],
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_name": "ceph_lv1",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_size": "21470642176",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "name": "ceph_lv1",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "tags": {
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cluster_name": "ceph",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.crush_device_class": "",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.encrypted": "0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.objectstore": "bluestore",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osd_id": "1",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.type": "block",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.vdo": "0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.with_tpm": "0"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             },
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "type": "block",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "vg_name": "ceph_vg1"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:         }
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:     ],
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:     "2": [
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:         {
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "devices": [
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "/dev/loop5"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             ],
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_name": "ceph_lv2",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_size": "21470642176",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "name": "ceph_lv2",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "tags": {
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.cluster_name": "ceph",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.crush_device_class": "",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.encrypted": "0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.objectstore": "bluestore",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osd_id": "2",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.type": "block",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.vdo": "0",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:                 "ceph.with_tpm": "0"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             },
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "type": "block",
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:             "vg_name": "ceph_vg2"
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:         }
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]:     ]
Jan 31 06:30:15 compute-0 hungry_chandrasekhar[253477]: }
Jan 31 06:30:15 compute-0 systemd[1]: libpod-d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b.scope: Deactivated successfully.
Jan 31 06:30:15 compute-0 podman[253460]: 2026-01-31 06:30:15.968745134 +0000 UTC m=+0.393674905 container died d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 06:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db2f6c1a9d3ca2c5d88b056549ec70d11bd386b18715ef475001093f1fc889a-merged.mount: Deactivated successfully.
Jan 31 06:30:16 compute-0 podman[253460]: 2026-01-31 06:30:16.010077809 +0000 UTC m=+0.435007580 container remove d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:30:16 compute-0 systemd[1]: libpod-conmon-d25aec2932726362e47d9d1c432afe15de3c2ee629de61aa4da3d810e35c486b.scope: Deactivated successfully.
Jan 31 06:30:16 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:16 compute-0 sudo[253383]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:16 compute-0 sudo[253496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:30:16 compute-0 sudo[253496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:16 compute-0 sudo[253496]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:16 compute-0 sudo[253521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:30:16 compute-0 sudo[253521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:16 compute-0 podman[253557]: 2026-01-31 06:30:16.395767757 +0000 UTC m=+0.032988010 container create 1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 06:30:16 compute-0 systemd[1]: Started libpod-conmon-1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0.scope.
Jan 31 06:30:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:30:16 compute-0 podman[253557]: 2026-01-31 06:30:16.38201265 +0000 UTC m=+0.019232933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:30:16 compute-0 podman[253557]: 2026-01-31 06:30:16.483545041 +0000 UTC m=+0.120765324 container init 1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 06:30:16 compute-0 podman[253557]: 2026-01-31 06:30:16.489272062 +0000 UTC m=+0.126492315 container start 1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_murdock, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 06:30:16 compute-0 angry_murdock[253573]: 167 167
Jan 31 06:30:16 compute-0 systemd[1]: libpod-1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0.scope: Deactivated successfully.
Jan 31 06:30:16 compute-0 conmon[253573]: conmon 1e0ace688571227467bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0.scope/container/memory.events
Jan 31 06:30:16 compute-0 podman[253557]: 2026-01-31 06:30:16.628567968 +0000 UTC m=+0.265788241 container attach 1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 06:30:16 compute-0 podman[253557]: 2026-01-31 06:30:16.629916816 +0000 UTC m=+0.267137069 container died 1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 06:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6b2554555f9e7579426b5f31ddc663953d81aa9ced94723fe330a5fa25eac5d-merged.mount: Deactivated successfully.
Jan 31 06:30:16 compute-0 podman[253557]: 2026-01-31 06:30:16.827193434 +0000 UTC m=+0.464413687 container remove 1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:30:16 compute-0 systemd[1]: libpod-conmon-1e0ace688571227467bfa588452eb5583c7cb028edfefbd29f027cad3e9f3fc0.scope: Deactivated successfully.
Jan 31 06:30:16 compute-0 podman[253597]: 2026-01-31 06:30:16.941838425 +0000 UTC m=+0.036283794 container create 147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 06:30:16 compute-0 systemd[1]: Started libpod-conmon-147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162.scope.
Jan 31 06:30:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db01c630a3c3fae4c83a55c00cd777d05acc15da2a647b3c8fe928d384f6436d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db01c630a3c3fae4c83a55c00cd777d05acc15da2a647b3c8fe928d384f6436d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db01c630a3c3fae4c83a55c00cd777d05acc15da2a647b3c8fe928d384f6436d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db01c630a3c3fae4c83a55c00cd777d05acc15da2a647b3c8fe928d384f6436d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:30:17 compute-0 podman[253597]: 2026-01-31 06:30:17.011472787 +0000 UTC m=+0.105918166 container init 147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:30:17 compute-0 podman[253597]: 2026-01-31 06:30:17.016723725 +0000 UTC m=+0.111169094 container start 147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:30:17 compute-0 podman[253597]: 2026-01-31 06:30:17.019590976 +0000 UTC m=+0.114036345 container attach 147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 06:30:17 compute-0 podman[253597]: 2026-01-31 06:30:16.923598781 +0000 UTC m=+0.018044160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:30:17 compute-0 ceph-mon[75251]: pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:17 compute-0 lvm[253690]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:30:17 compute-0 lvm[253690]: VG ceph_vg0 finished
Jan 31 06:30:17 compute-0 lvm[253693]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:30:17 compute-0 lvm[253693]: VG ceph_vg1 finished
Jan 31 06:30:17 compute-0 lvm[253695]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:30:17 compute-0 lvm[253695]: VG ceph_vg2 finished
Jan 31 06:30:17 compute-0 gracious_hypatia[253614]: {}
Jan 31 06:30:17 compute-0 systemd[1]: libpod-147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162.scope: Deactivated successfully.
Jan 31 06:30:17 compute-0 podman[253597]: 2026-01-31 06:30:17.750952805 +0000 UTC m=+0.845398174 container died 147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hypatia, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 06:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-db01c630a3c3fae4c83a55c00cd777d05acc15da2a647b3c8fe928d384f6436d-merged.mount: Deactivated successfully.
Jan 31 06:30:17 compute-0 podman[253597]: 2026-01-31 06:30:17.810275127 +0000 UTC m=+0.904720496 container remove 147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hypatia, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:30:17 compute-0 systemd[1]: libpod-conmon-147a1f009d3c312b56ea517c14a73c335692dcee206d0c3b724df91197d1f162.scope: Deactivated successfully.
Jan 31 06:30:17 compute-0 sudo[253521]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:30:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:30:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:30:17 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:30:17 compute-0 sudo[253712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:30:17 compute-0 sudo[253712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:30:17 compute-0 sudo[253712]: pam_unix(sudo:session): session closed for user root
Jan 31 06:30:18 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:30:18 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:30:18 compute-0 ceph-mon[75251]: pgmap v1225: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:30:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/707629189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:30:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:30:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/707629189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:30:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:30:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2498605624' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:30:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:30:19 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2498605624' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:30:19 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/707629189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:30:19 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/707629189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:30:19 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2498605624' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:30:19 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2498605624' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:30:19 compute-0 nova_compute[239679]: 2026-01-31 06:30:19.917 239684 ERROR nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}
Jan 31 06:30:19 compute-0 nova_compute[239679]: 2026-01-31 06:30:19.919 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:30:19 compute-0 nova_compute[239679]: 2026-01-31 06:30:19.920 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:30:20 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:30:20 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2746380593' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:30:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:30:20 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2746380593' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:30:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 06:30:20 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/707422770' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 06:30:20 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14486 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 06:30:20 compute-0 ceph-mgr[75550]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 06:30:20 compute-0 ceph-mgr[75550]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 06:30:20 compute-0 ceph-mon[75251]: pgmap v1226: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:20 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2746380593' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:30:20 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/2746380593' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:30:20 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/707422770' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 06:30:21 compute-0 ceph-mon[75251]: from='client.14486 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 06:30:22 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.478 239684 ERROR nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] [req-9bd3d73c-940c-465b-b811-fb541218a911] Failed to retrieve resource provider tree from placement API for UUID b3aa6abb-42c7-4433-b36f-4272440bddc9. Got 503: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}.
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 11.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Error updating resources for node compute-0.ctlplane.example.com.: nova.exception.ResourceProviderRetrievalFailed: Failed to get resource provider with UUID b3aa6abb-42c7-4433-b36f-4272440bddc9
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager Traceback (most recent call last):
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10513, in _update_available_resource_for_node
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     self.rt.update_available_resource(context, nodename,
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 889, in update_available_resource
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     self._update_available_resource(context, resources, startup=startup)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in inner
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     return f(*args, **kwargs)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 994, in _update_available_resource
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     self._update(context, cn, startup=startup)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 1303, in _update
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     self._update_to_placement(context, compute_node, startup)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/retrying.py", line 49, in wrapped_f
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     return Retrying(*dargs, **dkw).call(f, *args, **kw)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/retrying.py", line 206, in call
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     return attempt.get(self._wrap_exception)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/retrying.py", line 247, in get
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     six.reraise(self.value[0], self.value[1], self.value[2])
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     raise value
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/retrying.py", line 200, in call
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 1208, in _update_to_placement
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     prov_tree = self.reportclient.get_provider_tree_and_ensure_root(
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 899, in get_provider_tree_and_ensure_root
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     self._ensure_resource_provider(
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 688, in _ensure_resource_provider
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     rps_to_refresh = self.get_providers_in_tree(context, uuid)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 551, in get_providers_in_tree
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager nova.exception.ResourceProviderRetrievalFailed: Failed to get resource provider with UUID b3aa6abb-42c7-4433-b36f-4272440bddc9
Jan 31 06:30:23 compute-0 nova_compute[239679]: 2026-01-31 06:30:23.479 239684 ERROR nova.compute.manager 
Jan 31 06:30:23 compute-0 ceph-mon[75251]: pgmap v1227: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:24 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:24 compute-0 nova_compute[239679]: 2026-01-31 06:30:24.487 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:24 compute-0 nova_compute[239679]: 2026-01-31 06:30:24.488 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:24 compute-0 nova_compute[239679]: 2026-01-31 06:30:24.587 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:24 compute-0 nova_compute[239679]: 2026-01-31 06:30:24.588 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:30:24 compute-0 nova_compute[239679]: 2026-01-31 06:30:24.588 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:30:24 compute-0 ceph-mon[75251]: pgmap v1228: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.251 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.251 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.251 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.252 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.252 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.252 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.252 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.252 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:30:25 compute-0 nova_compute[239679]: 2026-01-31 06:30:25.252 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:30:26 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:27 compute-0 ceph-mon[75251]: pgmap v1229: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:27 compute-0 nova_compute[239679]: 2026-01-31 06:30:27.465 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:30:27 compute-0 nova_compute[239679]: 2026-01-31 06:30:27.465 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:30:27 compute-0 nova_compute[239679]: 2026-01-31 06:30:27.465 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:30:27 compute-0 nova_compute[239679]: 2026-01-31 06:30:27.465 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:30:27 compute-0 nova_compute[239679]: 2026-01-31 06:30:27.466 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:30:27 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:30:27 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552549885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:30:28 compute-0 nova_compute[239679]: 2026-01-31 06:30:28.011 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:30:28 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:28 compute-0 nova_compute[239679]: 2026-01-31 06:30:28.134 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:30:28 compute-0 nova_compute[239679]: 2026-01-31 06:30:28.135 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5097MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:30:28 compute-0 nova_compute[239679]: 2026-01-31 06:30:28.135 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:30:28 compute-0 nova_compute[239679]: 2026-01-31 06:30:28.135 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:30:28 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1552549885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:30:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:29 compute-0 ceph-mon[75251]: pgmap v1230: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:30 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:30 compute-0 ceph-mon[75251]: pgmap v1231: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:31 compute-0 nova_compute[239679]: 2026-01-31 06:30:31.922 239684 ERROR nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}
Jan 31 06:30:31 compute-0 nova_compute[239679]: 2026-01-31 06:30:31.922 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:30:31 compute-0 nova_compute[239679]: 2026-01-31 06:30:31.923 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:30:32 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:33 compute-0 ceph-mon[75251]: pgmap v1232: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:34 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:34 compute-0 ceph-mon[75251]: pgmap v1233: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] [req-ee1ae86b-00ac-40b6-95ec-75dce1c6bab6] Failed to retrieve resource provider tree from placement API for UUID b3aa6abb-42c7-4433-b36f-4272440bddc9. Got 503: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}.
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Error updating resources for node compute-0.ctlplane.example.com.: nova.exception.ResourceProviderRetrievalFailed: Failed to get resource provider with UUID b3aa6abb-42c7-4433-b36f-4272440bddc9
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager Traceback (most recent call last):
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10513, in _update_available_resource_for_node
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     self.rt.update_available_resource(context, nodename,
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 889, in update_available_resource
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     self._update_available_resource(context, resources, startup=startup)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in inner
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     return f(*args, **kwargs)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 994, in _update_available_resource
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     self._update(context, cn, startup=startup)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 1303, in _update
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     self._update_to_placement(context, compute_node, startup)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/retrying.py", line 49, in wrapped_f
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     return Retrying(*dargs, **dkw).call(f, *args, **kw)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/retrying.py", line 206, in call
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     return attempt.get(self._wrap_exception)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/retrying.py", line 247, in get
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     six.reraise(self.value[0], self.value[1], self.value[2])
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     raise value
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/retrying.py", line 200, in call
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 1208, in _update_to_placement
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     prov_tree = self.reportclient.get_provider_tree_and_ensure_root(
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 899, in get_provider_tree_and_ensure_root
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     self._ensure_resource_provider(
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 688, in _ensure_resource_provider
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     rps_to_refresh = self.get_providers_in_tree(context, uuid)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager   File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 551, in get_providers_in_tree
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager     raise exception.ResourceProviderRetrievalFailed(uuid=uuid)
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager nova.exception.ResourceProviderRetrievalFailed: Failed to get resource provider with UUID b3aa6abb-42c7-4433-b36f-4272440bddc9
Jan 31 06:30:35 compute-0 nova_compute[239679]: 2026-01-31 06:30:35.466 239684 ERROR nova.compute.manager 
Jan 31 06:30:36 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:36 compute-0 podman[253760]: 2026-01-31 06:30:36.127141124 +0000 UTC m=+0.042137219 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 06:30:36 compute-0 podman[253759]: 2026-01-31 06:30:36.148721892 +0000 UTC m=+0.064490228 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 06:30:37 compute-0 ceph-mon[75251]: pgmap v1234: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:38 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:39 compute-0 ceph-mon[75251]: pgmap v1235: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:40 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:41 compute-0 ceph-mon[75251]: pgmap v1236: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:42 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:43 compute-0 ceph-mon[75251]: pgmap v1237: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:44 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:30:44
Jan 31 06:30:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:30:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:30:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'vms', '.mgr', 'images', 'volumes', 'default.rgw.log']
Jan 31 06:30:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:30:44 compute-0 ceph-mon[75251]: pgmap v1238: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:30:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:30:46 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:46 compute-0 ceph-mon[75251]: pgmap v1239: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:48 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:49 compute-0 ceph-mon[75251]: pgmap v1240: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:50 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:30:50.227 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:30:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:30:50.228 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:30:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:30:50.228 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:30:51 compute-0 ceph-mon[75251]: pgmap v1241: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:52 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:53 compute-0 ceph-mon[75251]: pgmap v1242: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:54 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:54 compute-0 ceph-mon[75251]: pgmap v1243: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658956393945913 of space, bias 1.0, pg target 0.1997686918183774 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8582738702586133e-06 of space, bias 4.0, pg target 0.002229928644310336 quantized to 16 (current 16)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:30:56 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:57 compute-0 ceph-mon[75251]: pgmap v1244: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:58 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:30:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:30:59 compute-0 ceph-mon[75251]: pgmap v1245: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:00 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:01 compute-0 ceph-mon[75251]: pgmap v1246: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:02 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:03 compute-0 ceph-mon[75251]: pgmap v1247: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:04 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:05 compute-0 ceph-mon[75251]: pgmap v1248: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:06 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:06 compute-0 ceph-mon[75251]: pgmap v1249: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:07 compute-0 podman[253803]: 2026-01-31 06:31:07.156892534 +0000 UTC m=+0.073628106 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 06:31:07 compute-0 podman[253804]: 2026-01-31 06:31:07.157525862 +0000 UTC m=+0.075791907 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 06:31:08 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.349584) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769841068349689, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1140, "num_deletes": 251, "total_data_size": 1710166, "memory_usage": 1732944, "flush_reason": "Manual Compaction"}
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769841068395694, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1694030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24584, "largest_seqno": 25723, "table_properties": {"data_size": 1688455, "index_size": 2970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11647, "raw_average_key_size": 19, "raw_value_size": 1677408, "raw_average_value_size": 2843, "num_data_blocks": 133, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769840954, "oldest_key_time": 1769840954, "file_creation_time": 1769841068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 46144 microseconds, and 3414 cpu microseconds.
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.395739) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1694030 bytes OK
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.395756) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.494622) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.494654) EVENT_LOG_v1 {"time_micros": 1769841068494645, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.494680) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1704935, prev total WAL file size 1704935, number of live WAL files 2.
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.495222) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1654KB)], [56(7687KB)]
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769841068495254, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9565592, "oldest_snapshot_seqno": -1}
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4722 keys, 7799381 bytes, temperature: kUnknown
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769841068644829, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7799381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7767351, "index_size": 19120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118218, "raw_average_key_size": 25, "raw_value_size": 7681385, "raw_average_value_size": 1626, "num_data_blocks": 790, "num_entries": 4722, "num_filter_entries": 4722, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769838804, "oldest_key_time": 0, "file_creation_time": 1769841068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "bd7b07cd-b6df-4f61-a546-6834a7dc38a0", "db_session_id": "T9FROEUWTS2FQTYPCQMI", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 06:31:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.645062) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7799381 bytes
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.693679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.9 rd, 52.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.3) write-amplify(4.6) OK, records in: 5236, records dropped: 514 output_compression: NoCompression
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.693713) EVENT_LOG_v1 {"time_micros": 1769841068693700, "job": 30, "event": "compaction_finished", "compaction_time_micros": 149672, "compaction_time_cpu_micros": 15010, "output_level": 6, "num_output_files": 1, "total_output_size": 7799381, "num_input_records": 5236, "num_output_records": 4722, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769841068694078, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769841068694848, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.495161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.694930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.694936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.694938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.694940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:31:08 compute-0 ceph-mon[75251]: rocksdb: (Original Log Time 2026/01/31-06:31:08.694942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 06:31:09 compute-0 ceph-mon[75251]: pgmap v1250: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:10 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:11 compute-0 ceph-mon[75251]: pgmap v1251: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:12 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:12 compute-0 ceph-mon[75251]: pgmap v1252: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:14 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:15 compute-0 ceph-mon[75251]: pgmap v1253: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:31:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:31:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:31:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:31:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:31:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:31:16 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:17 compute-0 ceph-mon[75251]: pgmap v1254: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:17 compute-0 sudo[253846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:31:17 compute-0 sudo[253846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:17 compute-0 sudo[253846]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:18 compute-0 sudo[253871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:31:18 compute-0 sudo[253871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:18 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:18 compute-0 sudo[253871]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:31:18 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:31:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:31:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:31:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:31:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:31:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:31:18 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:31:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:31:18 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:31:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:31:18 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:31:18 compute-0 sudo[253927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:31:18 compute-0 sudo[253927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:18 compute-0 sudo[253927]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:18 compute-0 sudo[253952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:31:18 compute-0 sudo[253952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:18 compute-0 podman[253989]: 2026-01-31 06:31:18.790414523 +0000 UTC m=+0.040307757 container create 55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 06:31:18 compute-0 systemd[1]: Started libpod-conmon-55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2.scope.
Jan 31 06:31:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:31:18 compute-0 podman[253989]: 2026-01-31 06:31:18.766499829 +0000 UTC m=+0.016393063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:31:19 compute-0 podman[253989]: 2026-01-31 06:31:19.096678343 +0000 UTC m=+0.346571607 container init 55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:31:19 compute-0 podman[253989]: 2026-01-31 06:31:19.104802642 +0000 UTC m=+0.354695876 container start 55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:31:19 compute-0 nostalgic_turing[254005]: 167 167
Jan 31 06:31:19 compute-0 systemd[1]: libpod-55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2.scope: Deactivated successfully.
Jan 31 06:31:19 compute-0 podman[253989]: 2026-01-31 06:31:19.232557482 +0000 UTC m=+0.482450746 container attach 55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 06:31:19 compute-0 podman[253989]: 2026-01-31 06:31:19.233248012 +0000 UTC m=+0.483141246 container died 55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:31:19 compute-0 ceph-mon[75251]: pgmap v1255: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:31:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:31:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:31:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:31:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:31:19 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-084a1fa4d21a07bbe15c201149f273b62cc2f29086be621644b438a3b39d2de3-merged.mount: Deactivated successfully.
Jan 31 06:31:19 compute-0 podman[253989]: 2026-01-31 06:31:19.292352627 +0000 UTC m=+0.542245861 container remove 55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 31 06:31:19 compute-0 systemd[1]: libpod-conmon-55da60ee7b6ee542ef3939382ca6684f727921a5a3b4a3f4502de3e0de47cfd2.scope: Deactivated successfully.
Jan 31 06:31:19 compute-0 podman[254031]: 2026-01-31 06:31:19.407229525 +0000 UTC m=+0.040230095 container create 9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_feynman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 06:31:19 compute-0 systemd[1]: Started libpod-conmon-9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871.scope.
Jan 31 06:31:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bdfa162c2adb6eb7a9cb82b834b183c063bb055d2398db143aa1254d3b67d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bdfa162c2adb6eb7a9cb82b834b183c063bb055d2398db143aa1254d3b67d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bdfa162c2adb6eb7a9cb82b834b183c063bb055d2398db143aa1254d3b67d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bdfa162c2adb6eb7a9cb82b834b183c063bb055d2398db143aa1254d3b67d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bdfa162c2adb6eb7a9cb82b834b183c063bb055d2398db143aa1254d3b67d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:19 compute-0 podman[254031]: 2026-01-31 06:31:19.389275889 +0000 UTC m=+0.022276499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:31:19 compute-0 podman[254031]: 2026-01-31 06:31:19.489887264 +0000 UTC m=+0.122888034 container init 9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 31 06:31:19 compute-0 podman[254031]: 2026-01-31 06:31:19.500629677 +0000 UTC m=+0.133630247 container start 9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_feynman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:31:19 compute-0 podman[254031]: 2026-01-31 06:31:19.504536417 +0000 UTC m=+0.137537017 container attach 9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_feynman, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 06:31:19 compute-0 bold_feynman[254047]: --> passed data devices: 0 physical, 3 LVM
Jan 31 06:31:19 compute-0 bold_feynman[254047]: --> All data devices are unavailable
Jan 31 06:31:19 compute-0 systemd[1]: libpod-9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871.scope: Deactivated successfully.
Jan 31 06:31:19 compute-0 podman[254031]: 2026-01-31 06:31:19.937330732 +0000 UTC m=+0.570331302 container died 9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_feynman, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 06:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-20bdfa162c2adb6eb7a9cb82b834b183c063bb055d2398db143aa1254d3b67d3-merged.mount: Deactivated successfully.
Jan 31 06:31:20 compute-0 podman[254031]: 2026-01-31 06:31:20.040381926 +0000 UTC m=+0.673382516 container remove 9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 06:31:20 compute-0 systemd[1]: libpod-conmon-9f3a182f7b85643c1918ffb42512c967d58848d85b59de322d5b512941726871.scope: Deactivated successfully.
Jan 31 06:31:20 compute-0 sudo[253952]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:20 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:20 compute-0 sudo[254078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:31:20 compute-0 sudo[254078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:20 compute-0 sudo[254078]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:20 compute-0 sudo[254103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- lvm list --format json
Jan 31 06:31:20 compute-0 sudo[254103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:20 compute-0 podman[254140]: 2026-01-31 06:31:20.444582496 +0000 UTC m=+0.023355599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:31:20 compute-0 podman[254140]: 2026-01-31 06:31:20.549792551 +0000 UTC m=+0.128565604 container create 83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:31:20 compute-0 systemd[1]: Started libpod-conmon-83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a.scope.
Jan 31 06:31:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:31:20 compute-0 podman[254140]: 2026-01-31 06:31:20.691323189 +0000 UTC m=+0.270096272 container init 83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 06:31:20 compute-0 podman[254140]: 2026-01-31 06:31:20.69774893 +0000 UTC m=+0.276521983 container start 83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 06:31:20 compute-0 podman[254140]: 2026-01-31 06:31:20.700754765 +0000 UTC m=+0.279527838 container attach 83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:31:20 compute-0 fervent_einstein[254157]: 167 167
Jan 31 06:31:20 compute-0 systemd[1]: libpod-83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a.scope: Deactivated successfully.
Jan 31 06:31:20 compute-0 podman[254140]: 2026-01-31 06:31:20.704108879 +0000 UTC m=+0.282881942 container died 83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 06:31:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ca21633056c2e723a6858ad70ba49d52448894c0cb0a3a2d17eb401f057446-merged.mount: Deactivated successfully.
Jan 31 06:31:20 compute-0 podman[254140]: 2026-01-31 06:31:20.732822108 +0000 UTC m=+0.311595171 container remove 83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 06:31:20 compute-0 systemd[1]: libpod-conmon-83e1aade38d8c8c553379506eb78dee657f7ebebc9748cedefefed19b90f460a.scope: Deactivated successfully.
Jan 31 06:31:20 compute-0 podman[254180]: 2026-01-31 06:31:20.84359905 +0000 UTC m=+0.034951726 container create 74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 06:31:20 compute-0 systemd[1]: Started libpod-conmon-74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69.scope.
Jan 31 06:31:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf9b36a0b39df15a4620be42e9032e71ed0310eb0de2d62b7d8b580d5974373/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf9b36a0b39df15a4620be42e9032e71ed0310eb0de2d62b7d8b580d5974373/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf9b36a0b39df15a4620be42e9032e71ed0310eb0de2d62b7d8b580d5974373/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf9b36a0b39df15a4620be42e9032e71ed0310eb0de2d62b7d8b580d5974373/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:20 compute-0 podman[254180]: 2026-01-31 06:31:20.898627941 +0000 UTC m=+0.089980637 container init 74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 31 06:31:20 compute-0 podman[254180]: 2026-01-31 06:31:20.90249586 +0000 UTC m=+0.093848536 container start 74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:31:20 compute-0 podman[254180]: 2026-01-31 06:31:20.91245433 +0000 UTC m=+0.103807096 container attach 74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 06:31:20 compute-0 podman[254180]: 2026-01-31 06:31:20.827404224 +0000 UTC m=+0.018756930 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:31:21 compute-0 tender_villani[254197]: {
Jan 31 06:31:21 compute-0 tender_villani[254197]:     "0": [
Jan 31 06:31:21 compute-0 tender_villani[254197]:         {
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "devices": [
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "/dev/loop3"
Jan 31 06:31:21 compute-0 tender_villani[254197]:             ],
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_name": "ceph_lv0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_size": "21470642176",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=fccfda9b-7473-4dd5-8d91-930b3f0aef0b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "name": "ceph_lv0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "tags": {
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.block_uuid": "4IAOF3-Qgj8-qo4k-KrAG-LEN0-fXs2-rN2yIu",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cluster_name": "ceph",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.crush_device_class": "",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.encrypted": "0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.objectstore": "bluestore",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osd_fsid": "fccfda9b-7473-4dd5-8d91-930b3f0aef0b",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osd_id": "0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.type": "block",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.vdo": "0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.with_tpm": "0"
Jan 31 06:31:21 compute-0 tender_villani[254197]:             },
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "type": "block",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "vg_name": "ceph_vg0"
Jan 31 06:31:21 compute-0 tender_villani[254197]:         }
Jan 31 06:31:21 compute-0 tender_villani[254197]:     ],
Jan 31 06:31:21 compute-0 tender_villani[254197]:     "1": [
Jan 31 06:31:21 compute-0 tender_villani[254197]:         {
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "devices": [
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "/dev/loop4"
Jan 31 06:31:21 compute-0 tender_villani[254197]:             ],
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_name": "ceph_lv1",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_size": "21470642176",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=73952b02-2c42-4313-9a4d-6daccff98410,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "name": "ceph_lv1",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "tags": {
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.block_uuid": "V2mtic-Vi1l-Jtcd-ItL6-LzeK-Ko9H-Xl1DYl",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cluster_name": "ceph",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.crush_device_class": "",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.encrypted": "0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.objectstore": "bluestore",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osd_fsid": "73952b02-2c42-4313-9a4d-6daccff98410",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osd_id": "1",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.type": "block",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.vdo": "0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.with_tpm": "0"
Jan 31 06:31:21 compute-0 tender_villani[254197]:             },
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "type": "block",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "vg_name": "ceph_vg1"
Jan 31 06:31:21 compute-0 tender_villani[254197]:         }
Jan 31 06:31:21 compute-0 tender_villani[254197]:     ],
Jan 31 06:31:21 compute-0 tender_villani[254197]:     "2": [
Jan 31 06:31:21 compute-0 tender_villani[254197]:         {
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "devices": [
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "/dev/loop5"
Jan 31 06:31:21 compute-0 tender_villani[254197]:             ],
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_name": "ceph_lv2",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_size": "21470642176",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=797ee2fc-ca49-5eee-87c0-542bb035a7d7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b9c925ba-bcb8-4095-8064-de1a4f48f42c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "lv_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "name": "ceph_lv2",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "tags": {
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.block_uuid": "tQ0Bxd-RPY3-T01r-Rol6-Zh8b-cGus-XaDFTz",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cluster_fsid": "797ee2fc-ca49-5eee-87c0-542bb035a7d7",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.cluster_name": "ceph",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.crush_device_class": "",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.encrypted": "0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.objectstore": "bluestore",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osd_fsid": "b9c925ba-bcb8-4095-8064-de1a4f48f42c",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osd_id": "2",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.type": "block",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.vdo": "0",
Jan 31 06:31:21 compute-0 tender_villani[254197]:                 "ceph.with_tpm": "0"
Jan 31 06:31:21 compute-0 tender_villani[254197]:             },
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "type": "block",
Jan 31 06:31:21 compute-0 tender_villani[254197]:             "vg_name": "ceph_vg2"
Jan 31 06:31:21 compute-0 tender_villani[254197]:         }
Jan 31 06:31:21 compute-0 tender_villani[254197]:     ]
Jan 31 06:31:21 compute-0 tender_villani[254197]: }
Jan 31 06:31:21 compute-0 systemd[1]: libpod-74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69.scope: Deactivated successfully.
Jan 31 06:31:21 compute-0 podman[254180]: 2026-01-31 06:31:21.180775201 +0000 UTC m=+0.372127887 container died 74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 06:31:21 compute-0 ceph-mon[75251]: pgmap v1256: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcf9b36a0b39df15a4620be42e9032e71ed0310eb0de2d62b7d8b580d5974373-merged.mount: Deactivated successfully.
Jan 31 06:31:21 compute-0 podman[254180]: 2026-01-31 06:31:21.65269069 +0000 UTC m=+0.844043366 container remove 74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 06:31:21 compute-0 systemd[1]: libpod-conmon-74415d0a0df6f5c91a760b154f2b4ceaf9fcdb3dd6d8e204ae7bc58e0ae43a69.scope: Deactivated successfully.
Jan 31 06:31:21 compute-0 sudo[254103]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:21 compute-0 sudo[254219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:31:21 compute-0 sudo[254219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:21 compute-0 sudo[254219]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:21 compute-0 sudo[254244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 -- raw list --format json
Jan 31 06:31:21 compute-0 sudo[254244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:22 compute-0 podman[254281]: 2026-01-31 06:31:22.029473307 +0000 UTC m=+0.034470362 container create c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 06:31:22 compute-0 systemd[1]: Started libpod-conmon-c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24.scope.
Jan 31 06:31:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:31:22 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:22 compute-0 podman[254281]: 2026-01-31 06:31:22.091643269 +0000 UTC m=+0.096640344 container init c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 06:31:22 compute-0 podman[254281]: 2026-01-31 06:31:22.098198073 +0000 UTC m=+0.103195128 container start c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 06:31:22 compute-0 brave_hopper[254298]: 167 167
Jan 31 06:31:22 compute-0 podman[254281]: 2026-01-31 06:31:22.10160691 +0000 UTC m=+0.106603985 container attach c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 06:31:22 compute-0 systemd[1]: libpod-c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24.scope: Deactivated successfully.
Jan 31 06:31:22 compute-0 conmon[254298]: conmon c5fe0f22e6e54f3c0e13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24.scope/container/memory.events
Jan 31 06:31:22 compute-0 podman[254281]: 2026-01-31 06:31:22.10373496 +0000 UTC m=+0.108732015 container died c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 06:31:22 compute-0 podman[254281]: 2026-01-31 06:31:22.013004103 +0000 UTC m=+0.018001178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e683e91ec8b06cf03767bfcde60f06dd86d7ac0060f91c7e62125680c5caa4f-merged.mount: Deactivated successfully.
Jan 31 06:31:22 compute-0 podman[254281]: 2026-01-31 06:31:22.138013745 +0000 UTC m=+0.143010800 container remove c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 06:31:22 compute-0 systemd[1]: libpod-conmon-c5fe0f22e6e54f3c0e130704e01c07db5f3afc67ef5385289f5077a57fb7dc24.scope: Deactivated successfully.
Jan 31 06:31:22 compute-0 podman[254322]: 2026-01-31 06:31:22.284309008 +0000 UTC m=+0.046552583 container create 4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 06:31:22 compute-0 systemd[1]: Started libpod-conmon-4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601.scope.
Jan 31 06:31:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 06:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3c55a0f944175e3d630d83d7cc1c0c9cd92c6627c3e3693667b07339568692/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3c55a0f944175e3d630d83d7cc1c0c9cd92c6627c3e3693667b07339568692/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3c55a0f944175e3d630d83d7cc1c0c9cd92c6627c3e3693667b07339568692/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d3c55a0f944175e3d630d83d7cc1c0c9cd92c6627c3e3693667b07339568692/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 06:31:22 compute-0 podman[254322]: 2026-01-31 06:31:22.259700464 +0000 UTC m=+0.021944119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:31:22 compute-0 podman[254322]: 2026-01-31 06:31:22.371926726 +0000 UTC m=+0.134170301 container init 4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cerf, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 06:31:22 compute-0 podman[254322]: 2026-01-31 06:31:22.376306949 +0000 UTC m=+0.138550514 container start 4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 06:31:22 compute-0 podman[254322]: 2026-01-31 06:31:22.379081978 +0000 UTC m=+0.141325543 container attach 4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 06:31:22 compute-0 lvm[254417]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:31:22 compute-0 lvm[254417]: VG ceph_vg1 finished
Jan 31 06:31:22 compute-0 lvm[254416]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:31:22 compute-0 lvm[254416]: VG ceph_vg0 finished
Jan 31 06:31:22 compute-0 lvm[254419]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:31:22 compute-0 lvm[254419]: VG ceph_vg2 finished
Jan 31 06:31:23 compute-0 pensive_cerf[254338]: {}
Jan 31 06:31:23 compute-0 systemd[1]: libpod-4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601.scope: Deactivated successfully.
Jan 31 06:31:23 compute-0 podman[254322]: 2026-01-31 06:31:23.040744572 +0000 UTC m=+0.802988137 container died 4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 06:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d3c55a0f944175e3d630d83d7cc1c0c9cd92c6627c3e3693667b07339568692-merged.mount: Deactivated successfully.
Jan 31 06:31:23 compute-0 podman[254322]: 2026-01-31 06:31:23.074226316 +0000 UTC m=+0.836469881 container remove 4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 06:31:23 compute-0 systemd[1]: libpod-conmon-4d515c4b79f34769650ca3c7bf4014dacb397b1563669dc8cebf30495ff35601.scope: Deactivated successfully.
Jan 31 06:31:23 compute-0 sudo[254244]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 06:31:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:31:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 06:31:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:31:23 compute-0 sudo[254435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 06:31:23 compute-0 sudo[254435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:31:23 compute-0 sudo[254435]: pam_unix(sudo:session): session closed for user root
Jan 31 06:31:23 compute-0 ceph-mon[75251]: pgmap v1257: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:31:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:31:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:24 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:24 compute-0 ceph-mon[75251]: pgmap v1258: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:26 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:27 compute-0 ceph-mon[75251]: pgmap v1259: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:28 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:28 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:29 compute-0 ceph-mon[75251]: pgmap v1260: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 06:31:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/67383515' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:31:29 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 06:31:29 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/67383515' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:31:30 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:30 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/67383515' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 06:31:30 compute-0 ceph-mon[75251]: from='client.? 192.168.122.10:0/67383515' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 06:31:31 compute-0 ceph-mon[75251]: pgmap v1261: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:32 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:33 compute-0 ceph-mon[75251]: pgmap v1262: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:33 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:34 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:35 compute-0 ceph-mon[75251]: pgmap v1263: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.469 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.470 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.640 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.640 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.640 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.667 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.667 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.667 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.667 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.668 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.668 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.668 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.668 239684 DEBUG nova.compute.manager [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.668 239684 DEBUG oslo_service.periodic_task [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.812 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.813 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.813 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.813 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 06:31:35 compute-0 nova_compute[239679]: 2026-01-31 06:31:35.813 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:31:36 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:36 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:31:36 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713459199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:31:36 compute-0 nova_compute[239679]: 2026-01-31 06:31:36.301 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:31:36 compute-0 nova_compute[239679]: 2026-01-31 06:31:36.416 239684 WARNING nova.virt.libvirt.driver [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 06:31:36 compute-0 nova_compute[239679]: 2026-01-31 06:31:36.417 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5117MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 06:31:36 compute-0 nova_compute[239679]: 2026-01-31 06:31:36.417 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:31:36 compute-0 nova_compute[239679]: 2026-01-31 06:31:36.417 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:31:36 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3713459199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:31:36 compute-0 nova_compute[239679]: 2026-01-31 06:31:36.870 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 06:31:36 compute-0 nova_compute[239679]: 2026-01-31 06:31:36.870 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.067 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing inventories for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.314 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Updating ProviderTree inventory for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.315 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Updating inventory in ProviderTree for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.333 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing aggregate associations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.367 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Refreshing trait associations for resource provider b3aa6abb-42c7-4433-b36f-4272440bddc9, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SVM,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.393 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 06:31:37 compute-0 ceph-mon[75251]: pgmap v1264: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:37 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 06:31:37 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183667406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.924 239684 DEBUG oslo_concurrency.processutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.930 239684 DEBUG nova.compute.provider_tree [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed in ProviderTree for provider: b3aa6abb-42c7-4433-b36f-4272440bddc9 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.957 239684 DEBUG nova.scheduler.client.report [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Inventory has not changed for provider b3aa6abb-42c7-4433-b36f-4272440bddc9 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.959 239684 DEBUG nova.compute.resource_tracker [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 06:31:37 compute-0 nova_compute[239679]: 2026-01-31 06:31:37.959 239684 DEBUG oslo_concurrency.lockutils [None req-6c249cbe-bb7f-4f34-8b61-dc7edf4f362c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:31:38 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:38 compute-0 podman[254504]: 2026-01-31 06:31:38.158416081 +0000 UTC m=+0.077399993 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 06:31:38 compute-0 podman[254505]: 2026-01-31 06:31:38.164761053 +0000 UTC m=+0.077286250 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 06:31:38 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:38 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/183667406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 06:31:39 compute-0 ceph-mon[75251]: pgmap v1265: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:40 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:40 compute-0 ceph-mon[75251]: pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:42 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:43 compute-0 ceph-mon[75251]: pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:43 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:44 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Optimize plan auto_2026-01-31_06:31:44
Jan 31 06:31:44 compute-0 ceph-mgr[75550]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 06:31:44 compute-0 ceph-mgr[75550]: [balancer INFO root] do_upmap
Jan 31 06:31:44 compute-0 ceph-mgr[75550]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.rgw.root', 'volumes', '.mgr', 'backups', 'vms', 'default.rgw.meta']
Jan 31 06:31:44 compute-0 ceph-mgr[75550]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 06:31:44 compute-0 ceph-mon[75251]: pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:31:45 compute-0 ceph-mgr[75550]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 06:31:46 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:47 compute-0 ceph-mon[75251]: pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:48 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:48 compute-0 ceph-mon[75251]: pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:48 compute-0 sshd-session[254549]: Accepted publickey for zuul from 192.168.122.10 port 44880 ssh2: ECDSA SHA256:fM16jI5WUL+HnEtXgqHGdnNhGrz54JA2vbK2lcUEPRM
Jan 31 06:31:48 compute-0 systemd-logind[797]: New session 51 of user zuul.
Jan 31 06:31:48 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 31 06:31:48 compute-0 sshd-session[254549]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:31:48 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:48 compute-0 sudo[254553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 06:31:48 compute-0 sudo[254553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:31:50 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:31:50.228 155105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 06:31:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:31:50.229 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 06:31:50 compute-0 ovn_metadata_agent[155100]: 2026-01-31 06:31:50.229 155105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 06:31:51 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:31:51 compute-0 ceph-mon[75251]: pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:52 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14500 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:31:52 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:52 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 06:31:52 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/805269021' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 06:31:52 compute-0 ceph-mon[75251]: from='client.14498 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:31:53 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:54 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:54 compute-0 ceph-mon[75251]: from='client.14500 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:31:54 compute-0 ceph-mon[75251]: pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:54 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/805269021' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 06:31:55 compute-0 ceph-mon[75251]: pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658956393945913 of space, bias 1.0, pg target 0.1997686918183774 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8582738702586133e-06 of space, bias 4.0, pg target 0.002229928644310336 quantized to 16 (current 16)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 06:31:56 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:56 compute-0 ceph-mon[75251]: pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:57 compute-0 ovs-vsctl[254879]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 06:31:58 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:31:58 compute-0 virtqemud[239978]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 06:31:58 compute-0 virtqemud[239978]: hostname: compute-0
Jan 31 06:31:58 compute-0 virtqemud[239978]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 06:31:58 compute-0 virtqemud[239978]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 06:31:58 compute-0 virtqemud[239978]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 06:31:58 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: cache status {prefix=cache status} (starting...)
Jan 31 06:31:58 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:31:58 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: client ls {prefix=client ls} (starting...)
Jan 31 06:31:58 compute-0 lvm[255220]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 06:31:58 compute-0 lvm[255220]: VG ceph_vg1 finished
Jan 31 06:31:58 compute-0 lvm[255222]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 06:31:58 compute-0 lvm[255222]: VG ceph_vg0 finished
Jan 31 06:31:59 compute-0 lvm[255239]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 06:31:59 compute-0 lvm[255239]: VG ceph_vg2 finished
Jan 31 06:31:59 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14504 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:31:59 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 06:31:59 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 06:31:59 compute-0 ceph-mon[75251]: pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:00 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 06:32:00 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:00 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14506 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:00 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 06:32:00 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 06:32:00 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 06:32:00 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 31 06:32:00 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1130896513' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 06:32:00 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14510 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:00 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 06:32:00 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 06:32:01 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:32:01 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273579660' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:32:01 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: ops {prefix=ops} (starting...)
Jan 31 06:32:01 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:01 compute-0 ceph-mgr[75550]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 06:32:01 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: 2026-01-31T06:32:01.037+0000 7fc402b84640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 06:32:01 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: session ls {prefix=session ls} (starting...)
Jan 31 06:32:01 compute-0 ceph-mds[95670]: mds.cephfs.compute-0.olydew asok_command: status {prefix=status} (starting...)
Jan 31 06:32:02 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 31 06:32:02 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651228699' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 06:32:02 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 31 06:32:02 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3391230086' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 06:32:02 compute-0 ceph-mon[75251]: from='client.14504 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:02 compute-0 ceph-mon[75251]: pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:02 compute-0 ceph-mon[75251]: from='client.14506 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:02 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1130896513' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 06:32:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 06:32:03 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/594446797' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 06:32:03 compute-0 ceph-mon[75251]: from='client.14510 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:03 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4273579660' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:32:03 compute-0 ceph-mon[75251]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:03 compute-0 ceph-mon[75251]: pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:03 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1651228699' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 06:32:03 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3391230086' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 06:32:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 31 06:32:03 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239597153' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 06:32:03 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:32:04 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 06:32:04 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2789429445' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 06:32:04 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14526 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:04 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14530 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:04 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 06:32:04 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386774123' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 06:32:04 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/594446797' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 06:32:04 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1239597153' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 06:32:04 compute-0 ceph-mon[75251]: pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:04 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2789429445' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 06:32:04 compute-0 ceph-mon[75251]: from='client.14526 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 31 06:32:05 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4282221873' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 06:32:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 06:32:05 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744215999' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910018 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:51.928304+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 892928 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:52.928429+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 884736 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:53.928575+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 884736 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:54.928752+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 876544 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.983754158s of 12.117748260s, submitted: 4
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:55.928891+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:25.316313+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.12 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:25.327035+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.12 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 876544 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 143)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:25.316313+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.12 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:25.327035+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.12 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912433 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:56.929053+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 876544 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:57.929203+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 868352 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:58.929326+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:28.360022+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.18 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:28.370643+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.18 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 868352 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:59.929542+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 145)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:28.360022+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.18 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:28.370643+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.18 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 860160 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:00.929683+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:30.382412+0000 osd.2 (osd.2) 146 : cluster [DBG] 8.12 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:30.393022+0000 osd.2 (osd.2) 147 : cluster [DBG] 8.12 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 843776 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 147)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:30.382412+0000 osd.2 (osd.2) 146 : cluster [DBG] 8.12 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:30.393022+0000 osd.2 (osd.2) 147 : cluster [DBG] 8.12 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 919676 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:01.929875+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:31.414940+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1e scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:31.425565+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1e scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 811008 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 149)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:31.414940+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1e scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:31.425565+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1e scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:02.930177+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:32.394545+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.11 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:32.405156+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.11 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 802816 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 151)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:32.394545+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.11 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:32.405156+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.11 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:03.930412+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 802816 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:04.930612+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 802816 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.039112091s of 10.086163521s, submitted: 10
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:05.930747+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:35.402395+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1b scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:35.412969+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1b scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 802816 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 153)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:35.402395+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1b scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:35.412969+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1b scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:06.930925+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924506 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 794624 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:07.931093+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 794624 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:08.931339+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 786432 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:09.931496+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 786432 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:10.931618+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 786432 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:11.931737+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924506 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 778240 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:12.931887+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 778240 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:13.932014+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:43.403996+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1c scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:43.414586+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1c scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 155)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:43.403996+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1c scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:43.414586+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1c scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:14.932330+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:15.932485+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 770048 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.992003441s of 11.018499374s, submitted: 4
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:16.932609+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:46.421021+0000 osd.2 (osd.2) 156 : cluster [DBG] 8.d scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:46.431589+0000 osd.2 (osd.2) 157 : cluster [DBG] 8.d scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929332 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 157)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:46.421021+0000 osd.2 (osd.2) 156 : cluster [DBG] 8.d scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:46.431589+0000 osd.2 (osd.2) 157 : cluster [DBG] 8.d scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 745472 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:17.932804+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:47.378442+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.2 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:47.388727+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.2 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 745472 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 159)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:47.378442+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.2 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:47.388727+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.2 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:18.932998+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:48.345627+0000 osd.2 (osd.2) 160 : cluster [DBG] 8.2 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:48.356178+0000 osd.2 (osd.2) 161 : cluster [DBG] 8.2 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 729088 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 161)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:48.345627+0000 osd.2 (osd.2) 160 : cluster [DBG] 8.2 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:48.356178+0000 osd.2 (osd.2) 161 : cluster [DBG] 8.2 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:19.933159+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 729088 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:20.933371+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 720896 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:21.933531+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934156 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 704512 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:22.933728+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:52.418701+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.d scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:52.429284+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.d scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 696320 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 163)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:52.418701+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.d scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:52.429284+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.d scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:23.933975+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 688128 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:24.934141+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 688128 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:25.934268+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:55.391155+0000 osd.2 (osd.2) 164 : cluster [DBG] 8.15 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:55.401704+0000 osd.2 (osd.2) 165 : cluster [DBG] 8.15 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 679936 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.942831993s of 10.049035072s, submitted: 11
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 165)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:55.391155+0000 osd.2 (osd.2) 164 : cluster [DBG] 8.15 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:55.401704+0000 osd.2 (osd.2) 165 : cluster [DBG] 8.15 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:26.934460+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:56.399594+0000 osd.2 (osd.2) 166 : cluster [DBG] 11.15 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T05:59:56.410171+0000 osd.2 (osd.2) 167 : cluster [DBG] 11.15 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941397 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 679936 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 167)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:56.399594+0000 osd.2 (osd.2) 166 : cluster [DBG] 11.15 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T05:59:56.410171+0000 osd.2 (osd.2) 167 : cluster [DBG] 11.15 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:27.934727+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 671744 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:28.934849+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 671744 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:29.935000+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 671744 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:30.935199+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:00.361952+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.3 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:00.372423+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.3 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 655360 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:31.935419+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 4 last_log 171 sent 169 num 4 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:01.314871+0000 osd.2 (osd.2) 170 : cluster [DBG] 8.4 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:01.325470+0000 osd.2 (osd.2) 171 : cluster [DBG] 8.4 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946221 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 655360 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 169)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:00.361952+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.3 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:00.372423+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.3 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:32.935641+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 647168 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 171)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:01.314871+0000 osd.2 (osd.2) 170 : cluster [DBG] 8.4 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:01.325470+0000 osd.2 (osd.2) 171 : cluster [DBG] 8.4 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:33.935777+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:03.258156+0000 osd.2 (osd.2) 172 : cluster [DBG] 6.f scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:03.279357+0000 osd.2 (osd.2) 173 : cluster [DBG] 6.f scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 638976 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:34.935960+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 4 last_log 175 sent 173 num 4 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:04.289738+0000 osd.2 (osd.2) 174 : cluster [DBG] 11.8 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:04.303936+0000 osd.2 (osd.2) 175 : cluster [DBG] 11.8 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 173)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:03.258156+0000 osd.2 (osd.2) 172 : cluster [DBG] 6.f scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:03.279357+0000 osd.2 (osd.2) 173 : cluster [DBG] 6.f scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 638976 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:35.936171+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 175)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:04.289738+0000 osd.2 (osd.2) 174 : cluster [DBG] 11.8 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:04.303936+0000 osd.2 (osd.2) 175 : cluster [DBG] 11.8 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 630784 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:36.936293+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951045 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 630784 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.496338844s of 10.860325813s, submitted: 9
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:37.936409+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:07.330479+0000 osd.2 (osd.2) 176 : cluster [DBG] 11.9 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:07.344512+0000 osd.2 (osd.2) 177 : cluster [DBG] 11.9 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 177)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:07.330479+0000 osd.2 (osd.2) 176 : cluster [DBG] 11.9 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:07.344512+0000 osd.2 (osd.2) 177 : cluster [DBG] 11.9 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 630784 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:38.936609+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:08.331262+0000 osd.2 (osd.2) 178 : cluster [DBG] 8.11 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:08.345339+0000 osd.2 (osd.2) 179 : cluster [DBG] 8.11 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 614400 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 179)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:08.331262+0000 osd.2 (osd.2) 178 : cluster [DBG] 8.11 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:08.345339+0000 osd.2 (osd.2) 179 : cluster [DBG] 8.11 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:39.937292+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:09.314383+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.8 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:09.356731+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.8 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 589824 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 181)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:09.314383+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.8 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:09.356731+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.8 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:40.937444+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:10.297315+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.e scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:10.339596+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.e scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 581632 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 183)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:10.297315+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.e scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:10.339596+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.e scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:41.937656+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960693 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 565248 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:42.937799+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 565248 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:43.938008+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:13.263944+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.18 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:13.299296+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.18 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 532480 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 185)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:13.263944+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.18 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:13.299296+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.18 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:44.938197+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 532480 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:45.938324+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 524288 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:46.938497+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963106 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 524288 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:47.938659+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 516096 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:48.938816+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 516096 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.635191917s of 11.941458702s, submitted: 10
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:49.938986+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:19.271969+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.13 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:19.303765+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.13 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 507904 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:50.939288+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 187)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:19.271969+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.13 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:19.303765+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.13 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 507904 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:51.939430+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:21.231184+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.19 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:21.287575+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.19 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967932 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 507904 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 189)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:21.231184+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.19 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:21.287575+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.19 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:52.939592+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:22.269555+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.6 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:22.301312+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.6 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 507904 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 191)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:22.269555+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.6 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:22.301312+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.6 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:53.939842+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 507904 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:54.940035+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:24.259461+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.7 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:24.298241+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.7 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 491520 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 193)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:24.259461+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.7 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:24.298241+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.7 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:55.940195+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 491520 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:56.940312+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972754 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 483328 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:57.940470+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:27.281048+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.c scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:27.309309+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.c scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 458752 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 195)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:27.281048+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.c scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:27.309309+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.c scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:58.940668+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:28.263526+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.f scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:28.302351+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.f scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 442368 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.123243332s of 10.036557198s, submitted: 12
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 197)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:28.263526+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.f scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:28.302351+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.f scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:59.940867+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:29.308476+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.17 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  will send 2026-01-31T06:00:29.336701+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.17 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1490944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client handle_log_ack log(last 199)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:29.308476+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.17 scrub starts
Jan 31 06:32:05 compute-0 ceph-osd[88127]: log_client  logged 2026-01-31T06:00:29.336701+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.17 scrub ok
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:00.941029+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1482752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:01.941217+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1482752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:02.941350+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1482752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:03.941525+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 1474560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:04.941688+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 1474560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:05.941882+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71835648 unmapped: 1474560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:06.942055+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1466368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:07.942216+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1466368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:08.942405+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 1458176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:09.942523+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 1458176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:10.942690+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:11.942829+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:12.942964+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1449984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:13.943158+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 1441792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:14.943375+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 1441792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:15.943561+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 1433600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:16.943747+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 1433600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:17.943974+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 1425408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:18.944169+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 1425408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:19.944322+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 1425408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:20.944522+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71892992 unmapped: 1417216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:21.944657+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1409024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:22.944773+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1400832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:23.944897+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1400832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:24.945041+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1400832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:25.945160+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 1392640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:26.945274+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 1392640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:27.945429+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1384448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:28.945546+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1384448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:29.945667+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:30.945775+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:31.945941+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1384448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:32.946085+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:33.946228+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 1376256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:34.946442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1368064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:35.946575+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1368064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:36.946791+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1368064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:37.946963+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 1359872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:38.947127+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 1359872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:39.947262+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 1351680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:40.947434+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 1351680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:41.947583+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:42.947718+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:43.947912+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1343488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:44.948084+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 1335296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:45.948233+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 1335296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:46.948382+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 1335296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:47.948560+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:48.948746+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1327104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:49.948916+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 1318912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:50.949057+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 1318912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:51.949212+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 1310720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:52.949335+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 1310720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:53.949591+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 1310720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:54.950990+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:55.951169+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1294336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:56.951361+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:57.951502+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:58.951638+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 1286144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:59.951812+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:00.951962+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 1277952 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:01.952097+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:02.952228+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:03.952410+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1269760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:04.952597+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 1261568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:05.952731+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 1261568 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:06.952894+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:07.953070+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1253376 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:08.953215+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:09.953408+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:10.953582+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 1245184 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:11.953725+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:12.953845+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:13.954018+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 1228800 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:14.954239+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 1220608 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:15.954416+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:16.954660+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 1212416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:17.954919+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:18.955178+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 1204224 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:19.955335+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:20.955476+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:21.955670+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 1196032 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:22.955836+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:23.955994+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 1187840 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:24.956159+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:25.956328+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:26.956481+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1179648 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:27.956625+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:28.956840+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 1171456 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:29.957014+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 1163264 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:30.957215+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:31.957374+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 1155072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:32.957552+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:33.957766+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 1146880 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:34.957989+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:35.958169+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:36.958319+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 1138688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:37.958460+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 1130496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:38.958606+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 1130496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:39.958778+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:40.958934+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:41.959078+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1122304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:42.959227+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 1114112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:43.959392+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 1114112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:44.959568+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:45.959702+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:46.959908+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1105920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:47.960059+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:48.960304+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1097728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:49.960496+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:50.960651+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1089536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:51.960905+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:52.961939+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1081344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:53.962141+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:54.962462+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:55.962697+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1073152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:56.962825+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:57.963178+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:58.963488+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1064960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:59.963629+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:00.963819+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1056768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:01.963957+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1048576 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:02.964106+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1048576 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:03.964246+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1048576 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:04.964494+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:05.964640+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:06.964768+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1040384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:07.964906+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 1032192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:08.965192+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 1032192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:09.965449+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:10.965594+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 1024000 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:11.965720+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:12.965849+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:13.965968+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1015808 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:14.966159+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1007616 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:15.966314+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1007616 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:16.966443+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:17.966559+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 999424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:18.966675+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 991232 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:19.966806+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 991232 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:20.966924+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 991232 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:21.967061+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:22.967189+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:23.967319+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 983040 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:24.967474+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 974848 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:25.967611+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 974848 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:26.967762+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 966656 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:27.967883+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 966656 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:28.968022+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 958464 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:29.968166+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:30.968247+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 950272 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:31.968394+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 942080 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:32.968515+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 942080 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:33.968654+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 933888 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:34.968794+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 917504 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:35.968947+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 917504 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:36.969099+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 909312 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:37.969272+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72400896 unmapped: 909312 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:38.969400+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:39.969499+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 901120 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:40.969622+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:41.969748+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:42.969892+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 892928 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:43.970072+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 884736 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:44.970289+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 876544 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:45.970443+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 876544 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:46.970583+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 868352 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:47.970710+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 868352 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:48.970852+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 868352 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:49.971001+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 860160 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:50.971179+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 860160 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:51.971333+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 860160 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:52.971485+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 851968 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:53.971660+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 851968 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:54.971879+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 843776 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:55.972101+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 843776 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:56.972334+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 835584 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:57.972543+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 835584 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:58.972724+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72474624 unmapped: 835584 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:59.972896+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 827392 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:00.973045+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 827392 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:01.973211+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 827392 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:02.973368+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:03.973498+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 819200 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:04.973685+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 811008 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:05.973872+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 811008 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:06.974045+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 802816 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:07.974177+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 802816 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:08.974354+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 802816 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:09.974537+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 802816 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:10.974736+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 794624 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:11.974879+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 794624 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:12.975032+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 786432 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:13.975216+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 786432 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:14.975364+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:15.975488+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 770048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:16.975658+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:17.975851+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:18.976046+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 761856 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:19.976161+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:20.977333+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 753664 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:21.977471+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:22.977608+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:23.977694+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 745472 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:24.977798+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:25.977928+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:26.978140+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 737280 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:27.978246+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:28.978370+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:29.978505+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 729088 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:30.978646+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:31.978769+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:32.978924+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 720896 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:33.979060+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 712704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:34.979181+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 712704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:35.979341+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 712704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:36.979468+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 704512 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:37.979611+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 704512 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:38.979763+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 704512 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:39.979918+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:40.980037+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 696320 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:41.980199+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:42.980334+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:43.980525+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 688128 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:44.980766+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:45.980926+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:46.981066+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:47.981222+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:48.981377+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 671744 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:49.981528+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 663552 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:50.981667+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 663552 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:51.981838+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:52.981976+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:53.982220+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:54.982418+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:55.982590+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:56.982720+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 638976 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:57.982871+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:58.983019+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 630784 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:59.983294+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:00.983378+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:01.983605+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:02.983790+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:03.983994+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 614400 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:04.984172+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:05.984281+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:06.984397+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 606208 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:07.984513+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:08.984623+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:09.984732+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:10.984934+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 589824 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:11.985103+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:12.985312+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:13.985540+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:14.985911+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:15.986201+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:16.986358+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:17.986518+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:18.986786+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 557056 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:19.986972+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5525 writes, 24K keys, 5525 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5525 writes, 786 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5525 writes, 24K keys, 5525 commit groups, 1.0 writes per commit group, ingest: 18.88 MB, 0.03 MB/s
                                           Interval WAL: 5525 writes, 786 syncs, 7.03 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:20.987122+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 475136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:21.987273+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 475136 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:22.987442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:23.987612+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 466944 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:24.987884+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:25.989021+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 458752 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:26.989163+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:27.989433+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:28.989561+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 450560 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:29.989823+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 442368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:30.991188+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 442368 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:31.991554+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:32.991805+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:33.991931+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 434176 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:34.992188+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:35.992336+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 425984 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:36.992509+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 417792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:37.992699+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 417792 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:38.993090+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 409600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:39.993418+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 409600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:40.993552+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 409600 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:41.993688+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:42.993903+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:43.994071+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 401408 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:44.994238+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 393216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:45.994355+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 393216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:46.994475+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 393216 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:47.994637+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:48.994915+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 385024 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:49.995040+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:50.995195+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:51.995340+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 376832 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:52.995561+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:53.995690+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:54.995838+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 368640 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:55.995962+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 360448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:56.996127+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 360448 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:57.997888+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:58.999515+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 352256 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:00.000923+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:01.001518+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:02.001922+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:03.002684+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 344064 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:04.002901+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:05.003152+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 335872 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:06.003540+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:07.003690+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 327680 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:08.003858+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:09.004046+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:10.004299+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 319488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:11.004505+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:12.004706+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:13.004943+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 303104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:14.005158+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 311296 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:15.005336+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 303104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:16.005469+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 303104 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:17.005629+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:18.005960+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:19.006340+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 294912 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:20.006610+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:21.006939+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:22.012784+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 286720 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:23.013022+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 278528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:24.013233+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 278528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:25.013466+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 278528 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:26.013694+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:27.013898+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 270336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:28.014082+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:29.014276+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:30.014419+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:31.014531+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 262144 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 332.642272949s of 332.651367188s, submitted: 2
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:32.014921+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:33.015243+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 188416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:34.015350+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 106496 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:35.015478+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 98304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:36.015640+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 114688 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:37.015755+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 81920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:38.015959+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 98304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:39.016213+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 98304 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:40.016396+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:41.016620+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:42.016788+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 90112 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:43.016938+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 81920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:44.017146+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 81920 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:45.017372+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:46.017527+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 73728 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:47.017704+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 65536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:48.017928+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 65536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:49.018192+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 65536 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:50.018429+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:51.018617+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 57344 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:52.018766+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 49152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:53.018965+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 49152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:54.019188+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 49152 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:55.019388+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:56.019658+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 40960 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:57.019835+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 32768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:58.020054+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 32768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:59.020184+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 32768 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:00.020343+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 24576 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:01.020511+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 24576 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:02.020689+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:03.020857+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 16384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:04.021010+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 0 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:05.021199+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 0 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:06.021344+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 0 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:07.021520+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1040384 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:08.021669+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 1040384 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:09.021849+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:10.021988+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:11.022179+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1032192 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:12.022332+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:13.022474+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1024000 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:14.022606+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:15.022814+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:16.022975+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:17.023159+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:18.023279+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 991232 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:19.023424+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:20.023602+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:21.023768+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:22.023955+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:23.024076+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 974848 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:24.024218+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:25.024372+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:26.024513+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:27.024628+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:28.024808+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:29.024942+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:30.025150+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:31.025326+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:32.025504+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:33.025666+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:34.025824+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:35.025958+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:36.026174+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:37.026306+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:38.026422+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:39.026561+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:40.026697+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:41.026824+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:42.026986+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:43.027128+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:44.027251+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:45.027415+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:46.027535+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:47.027654+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:48.027789+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:49.027947+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:50.028095+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:51.028317+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:52.028521+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:53.028728+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:54.028869+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:55.029013+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 942080 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:56.031453+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 942080 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:57.031586+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 942080 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:58.031748+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 942080 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:59.031924+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:00.032105+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:01.032330+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:02.032523+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:03.032698+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:04.032867+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:05.033176+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:06.033368+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:07.033483+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:08.033711+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:09.033864+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:10.034022+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:11.034181+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:12.034366+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:13.034560+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:14.034752+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:15.034941+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:16.035140+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:17.035324+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:18.035518+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:19.035690+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:20.036034+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:21.036360+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:22.036537+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:23.036678+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:24.036808+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:25.037002+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:26.037182+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:27.037348+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:28.037471+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:29.037623+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:30.037768+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:31.037977+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:32.038100+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:33.039234+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:34.039341+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:35.039545+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:36.039756+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:37.039959+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:38.040093+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:39.040221+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:40.040349+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:41.040489+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:42.040647+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:43.040777+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:44.041011+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:45.041183+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:46.041293+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:47.041412+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:48.041548+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:49.041762+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:50.041881+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:51.042019+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:52.042205+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:53.042316+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:54.042466+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:55.042873+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:56.043061+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:57.043179+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:58.043304+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:59.043436+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:00.043648+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:01.043801+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:02.043949+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:03.044071+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:04.044177+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:05.044373+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:06.044525+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:07.044650+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:08.044838+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:09.044979+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:10.045096+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:11.045275+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:12.045418+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:13.045577+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:14.045731+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:15.045933+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:16.046226+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:17.046372+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:18.046503+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:19.046684+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:20.046825+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:21.046949+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:22.047146+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:23.047293+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:24.047441+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:25.047597+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:26.047746+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:27.047893+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:28.048044+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:29.048217+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:30.048408+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:31.048623+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:32.048770+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:33.048924+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:34.049073+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:35.049236+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:36.049445+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:37.049584+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:38.049711+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:39.049868+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:40.050005+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:41.050169+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:42.050301+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:43.050436+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:44.050577+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:45.050904+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:46.051332+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:47.051493+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:48.051626+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:49.051816+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:50.052141+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:51.052401+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:52.052607+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:53.052816+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:54.053043+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:55.053261+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:56.053458+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:57.053600+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:58.054462+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:59.054630+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:00.054851+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:01.055066+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:02.055323+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:03.055567+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:04.055757+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:05.055939+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:06.056184+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:07.056387+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:08.056502+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:09.056642+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:10.056810+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:11.056967+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:12.057099+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:13.057239+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:14.057382+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:15.057603+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:16.057779+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:17.057987+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:18.058206+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:19.058434+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:20.058627+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:21.058774+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:22.058902+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:23.059107+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:24.059314+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:25.059473+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:26.059601+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:27.059720+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:28.059871+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:29.060000+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:30.060164+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:31.060282+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:32.060409+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:33.060538+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:34.060752+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:35.060997+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:36.061171+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:37.061300+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:38.061424+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:39.061603+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:40.061811+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:41.061969+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:42.062137+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:43.062338+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:44.062485+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:45.062623+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:46.062767+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:47.062907+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:48.063056+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:49.063218+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:50.063394+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:51.063538+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:52.063670+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:53.063804+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:54.063964+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:55.064175+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:56.064366+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:57.064598+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:58.064792+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:59.064997+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 770048 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:00.065192+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 770048 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:01.065343+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 770048 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:02.065490+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 770048 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:03.065660+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 770048 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:04.065883+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:05.066134+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:06.066279+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:07.066461+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:08.066763+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:09.067080+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:10.067370+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:11.067560+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:12.067787+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:13.067953+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 753664 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:14.068235+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:15.068616+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:16.068870+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:17.069057+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:18.069367+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:19.069556+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:20.069747+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:21.069904+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:22.070056+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:23.070171+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:24.070305+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:25.070487+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:26.070635+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:27.129680+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:28.129868+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:29.130052+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979989 data_alloc: 218103808 data_used: 3686
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:30.130271+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:31.130568+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:32.130764+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.309387207s of 300.734405518s, submitted: 90
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: handle_auth_request added challenge on 0x5633a3e3b400
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:33.130902+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 524288 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:34.131002+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 507904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:35.131211+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 507904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:36.131385+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 507904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:37.131540+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 507904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:38.131667+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 507904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:39.131809+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 507904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:40.132226+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 507904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:41.132353+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:42.132480+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:43.132590+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:44.132763+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:45.132928+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:46.133040+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:47.133253+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 491520 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:48.133426+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:49.133557+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:50.133679+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:51.133846+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:52.133994+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:53.134203+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:54.134328+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:55.134455+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:56.134617+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:57.134779+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:58.134935+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:59.135089+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:00.135256+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 483328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:01.135381+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:02.135536+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:03.135697+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:04.135812+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:05.136030+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:06.136219+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:07.136443+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:08.136647+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:09.136839+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:10.137007+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:11.137183+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:12.137371+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:13.137516+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:14.137666+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:15.137886+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:16.138041+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:17.138205+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:18.138376+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:19.138550+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:20.138687+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:21.138814+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 466944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:22.138940+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:23.139066+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:24.139215+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:25.139446+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:26.139607+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:27.139722+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:28.139839+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:29.139955+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 450560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:30.140059+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 442368 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:31.140192+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 442368 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:32.140316+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 442368 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:33.140476+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 442368 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:34.140599+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 442368 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:35.140766+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:36.141031+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:37.141249+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:38.141391+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:39.141538+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 434176 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:40.141651+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 425984 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:41.141882+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 425984 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:42.142176+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:43.142324+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:44.142503+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:45.142691+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:46.142874+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:47.143208+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:48.143365+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:49.143492+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:50.143656+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:51.143786+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:52.143915+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:53.144069+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:54.144237+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 409600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:55.144476+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:56.144655+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:57.144857+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:58.145185+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:59.145651+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 401408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:00.146278+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:01.146657+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:02.146942+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:03.147271+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:04.147473+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:05.147805+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:06.148062+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:07.148383+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:08.148632+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:09.148756+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:10.148971+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:11.149220+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:12.149494+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:13.149695+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:14.149846+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:15.150080+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:16.150409+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:17.150567+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:18.150873+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:19.151027+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:20.151184+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:21.151344+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:22.151456+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:23.151632+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:24.151797+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:25.152196+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:26.152352+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:27.152536+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:28.152741+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:29.152876+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:30.152998+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:31.153167+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:32.153290+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:33.153410+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:34.153586+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:35.153759+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:36.153905+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:37.154041+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:38.154239+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:39.154389+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:40.154506+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:41.154622+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:42.154789+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:43.154912+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:44.155083+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:45.155690+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:46.155838+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:47.155996+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:48.156194+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:49.156329+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:50.156488+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:51.156694+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:52.156910+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:53.157171+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:54.157380+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:55.157581+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:56.157852+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:57.158023+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:58.158201+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:59.158338+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:00.158495+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:01.158672+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:02.158894+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:03.159042+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 393216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:04.159195+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:05.159430+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:06.159528+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:07.159725+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:08.159863+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:09.160051+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:10.160313+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:11.160489+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:12.160667+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:13.160854+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:14.160994+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 385024 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:15.161165+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:16.161306+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:17.161444+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:18.161597+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:19.161741+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:20.161895+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:21.162065+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:22.162262+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 376832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:23.162443+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 368640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:24.162582+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 368640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:25.162799+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 368640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:26.162953+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 368640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:27.163103+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 368640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:28.163308+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 368640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:29.163445+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 352256 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:30.163600+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 352256 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:31.163755+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 352256 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:32.163925+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 352256 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:33.164153+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 352256 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:34.164376+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 344064 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:35.164781+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 344064 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:36.164998+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 344064 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:37.165184+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:38.165339+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:39.165468+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:40.165634+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:41.165815+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:42.166023+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:43.166192+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:44.166360+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:45.166603+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:46.166781+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:47.166920+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:48.167048+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:49.167205+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:50.167344+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:51.167469+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:52.167741+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:53.167868+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 335872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:54.168056+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:55.168317+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:56.168442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:57.168589+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:58.169004+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:59.169270+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:00.169413+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:01.169558+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:02.169744+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:03.169879+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:04.170020+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:05.170223+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:06.170405+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:07.170564+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:08.170715+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:09.170894+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:10.171016+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:11.171164+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:12.171287+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:13.171409+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:14.171570+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 327680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:15.171762+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:16.171897+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:17.172044+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:18.172180+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:19.172311+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:20.172452+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5753 writes, 25K keys, 5753 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5753 writes, 900 syncs, 6.39 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05654b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633a05658d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:21.172629+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:22.172812+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:23.172955+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:24.173139+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:25.173314+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:26.173458+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:27.173634+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:28.173790+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:29.173955+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:30.174193+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:31.174446+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:32.174618+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:33.174766+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:34.174947+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:35.175217+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:36.175443+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:37.175602+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:38.175722+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:39.175871+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:40.176010+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:41.176141+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:42.176271+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:43.176476+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:44.176684+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:45.176914+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:46.177092+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:47.177342+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:48.177486+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:49.177614+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:50.177782+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:51.177974+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:52.178203+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:53.178342+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:54.178472+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:55.178646+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:56.178827+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:57.178966+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:58.179149+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 303104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:59.179294+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:00.179487+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:01.179681+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:02.179845+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:03.180033+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:04.180205+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:05.180432+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:06.180577+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:07.180711+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:08.180844+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:09.181017+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:10.181163+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:11.181282+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:12.181471+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:13.181591+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 294912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:14.181751+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:15.181920+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:16.182082+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:17.182251+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:18.182392+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:19.182516+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:20.182689+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:21.182863+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:22.182983+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:23.183130+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:24.183268+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:25.183434+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:26.183558+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:27.183760+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:28.183893+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:29.184086+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 286720 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:30.184309+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 278528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:31.184486+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 278528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 298.724273682s of 299.488861084s, submitted: 24
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:32.184618+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 278528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:33.184760+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 270336 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:34.184912+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 311296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [0,0,0,1])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:35.185285+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 278528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:36.185454+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 278528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:37.185608+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 229376 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:38.185741+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:39.185875+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:40.185999+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:41.186129+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:42.186243+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:43.186448+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:44.186637+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 1220608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:45.186829+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1204224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:46.186984+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1204224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:47.187181+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1204224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:48.187333+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1204224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:49.187475+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1204224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:50.187759+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1204224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:51.187913+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1204224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:52.188085+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 1204224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:53.188313+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1196032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:54.188474+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 1196032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:55.188737+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:56.188970+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:57.189092+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:58.189243+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:59.189422+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:00.189576+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:01.189770+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:02.189938+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:03.190096+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:04.190377+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:05.190612+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:06.190775+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:07.190951+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:08.191204+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:09.191352+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 1187840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:10.191498+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:11.191695+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:12.191899+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:13.192076+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:14.192258+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:15.193217+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 1179648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:16.193463+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 1155072 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:17.193784+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:18.193989+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:19.194628+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:20.195212+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:21.195522+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:22.195701+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:23.196164+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:24.196422+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:25.196579+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread fragmentation_score=0.000143 took=0.000037s
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:26.196774+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:27.197009+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:28.197157+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:29.197329+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:30.197487+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:31.197691+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:32.197837+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:33.197991+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:34.198441+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:35.199338+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:36.199467+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:37.199609+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:38.199737+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:39.199945+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:40.200160+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:41.200318+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:42.200454+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:43.200665+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 1146880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:44.200802+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:45.201024+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:46.201160+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:47.201295+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:48.201453+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:49.201589+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:50.201764+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:51.201943+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:52.202095+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:53.202284+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:54.202442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:55.202652+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:56.202777+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:57.202889+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:58.203030+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:59.203158+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:00.203320+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:01.203448+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:02.203640+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:03.203837+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:04.203970+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:05.204146+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:06.204298+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:07.204433+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:08.204605+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:09.204737+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:10.204877+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:11.205060+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:12.205205+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:13.205420+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1138688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:14.205549+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:15.205713+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:16.205815+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:17.205965+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:18.206159+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:19.206302+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:20.206492+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:21.206616+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:22.206759+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:23.206915+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 1130496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:24.207033+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:25.207197+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:26.207345+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:27.207599+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:28.207719+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:29.207870+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:30.208009+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:31.208155+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 1122304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:32.208284+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:33.208390+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:34.208528+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:35.208723+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:36.208885+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:37.208997+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:38.209163+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:39.209291+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:40.209442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:41.209565+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:42.209707+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:43.209825+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:44.210015+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:45.210208+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:46.210351+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:47.210546+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:48.210673+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:49.210841+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:50.210970+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 1105920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:51.211174+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1089536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:52.211347+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1089536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:53.211520+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 1089536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:54.211717+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1081344 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:55.211975+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1081344 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:56.212210+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1081344 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:57.212356+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1081344 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:58.212483+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1081344 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:59.212654+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1081344 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:00.212867+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1081344 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:01.213075+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 1081344 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:02.213270+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:03.213441+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:04.213657+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:05.213919+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:06.214056+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:07.214239+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:08.214461+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:09.214643+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:10.214820+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 1073152 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:11.214957+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1056768 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:12.215093+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1056768 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:13.215250+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1056768 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:14.215404+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76447744 unmapped: 1056768 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:15.215600+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:16.215775+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:17.215941+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:18.216081+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:19.216256+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:20.216394+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:21.216522+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:22.216697+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:23.216822+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:24.216938+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:25.217097+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:26.217398+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:27.217805+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:28.217976+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76455936 unmapped: 1048576 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:29.218271+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:30.218597+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 1040384 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:31.218757+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:32.218907+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:33.219083+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:34.219267+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:35.219447+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:36.219605+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:37.219790+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:38.219929+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:39.220054+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:40.220181+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:41.220339+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:42.220504+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:43.220671+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:44.220839+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:45.221223+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:46.221430+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:47.221643+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:48.221778+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:49.221957+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:50.222087+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:51.222237+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:52.222419+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:53.222550+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:54.222704+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:55.222865+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76480512 unmapped: 1024000 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:56.223090+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:57.223562+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:58.223922+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76488704 unmapped: 1015808 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:59.224159+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:00.224296+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:01.224787+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:02.224997+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:03.225308+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:04.225597+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:05.225862+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:06.226030+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:07.226170+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:08.226415+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:09.226550+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:10.226834+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:11.226987+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:12.227258+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:13.227430+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76496896 unmapped: 1007616 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:14.227560+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 999424 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:15.227757+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:16.227918+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:17.228091+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:18.228265+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:19.228403+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:20.228547+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:21.228675+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:22.228824+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:23.228944+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:24.229082+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:25.229312+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:26.229464+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:27.229608+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:28.229838+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:29.230021+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:30.230620+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:31.230903+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:32.231831+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:33.232473+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:34.232673+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:35.233026+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:36.233755+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:37.234198+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:38.234490+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:39.234934+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:40.235200+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 991232 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:41.235365+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 974848 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:42.235491+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 974848 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:43.235653+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 974848 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:44.235898+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 974848 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:45.236088+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:46.236283+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:47.236520+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:48.236768+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:49.236985+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:50.237126+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:51.237251+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:52.237416+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:53.237584+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:54.237839+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:55.238080+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:56.238256+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:57.238441+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:58.238623+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:59.238818+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:00.238976+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:01.239154+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:02.239296+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:03.239744+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:04.240187+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:05.240505+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:06.240839+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:07.241025+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:08.241234+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:09.241409+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:10.241691+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:11.241995+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:12.242273+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:13.242481+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 966656 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:14.242624+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 958464 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:15.242877+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:16.243073+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:17.243285+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:18.243415+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:19.243614+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:20.243780+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:21.243946+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:22.244106+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:23.244295+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 950272 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:24.244507+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:25.244732+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:26.244968+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:27.245238+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:28.245442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:29.245629+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:30.245747+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:31.245936+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:32.246099+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:33.246265+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:34.246410+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:35.246611+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:36.246752+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:37.246989+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:38.247162+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:39.247351+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:40.247603+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:41.247859+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:42.248172+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:43.248411+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:44.248605+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:45.248759+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:46.248891+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:47.249010+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:48.249189+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:49.249309+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:50.249474+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:51.249596+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:52.249779+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:53.249901+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:54.250058+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:55.250368+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:56.250539+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:57.250711+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:58.250968+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:59.251261+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:00.251463+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:01.251650+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:02.251834+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:03.252082+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:04.252340+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:05.252592+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:06.252817+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:07.252989+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:08.253157+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:09.253361+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:10.253540+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:11.253764+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:12.254025+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:13.254292+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:14.254622+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76562432 unmapped: 942080 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:15.254907+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:16.255143+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:17.255455+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:18.255616+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:19.255872+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:20.256202+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:21.256404+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:22.256532+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:23.256814+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:24.257064+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:25.257766+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:26.257983+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:27.258178+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:28.258297+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76578816 unmapped: 925696 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:29.258437+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:30.258639+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:31.258832+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:32.259010+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:33.259218+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:34.259377+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:35.259607+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:36.259797+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:37.259990+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:38.260134+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:39.260290+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:40.260503+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:41.260706+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:42.260927+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:43.261101+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:44.261725+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:45.261963+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:46.262259+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:47.262451+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:48.262640+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:49.262911+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:50.263097+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:51.263377+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:52.263554+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:53.263676+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:54.263868+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:55.264103+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:56.264356+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:57.264490+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:58.264652+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:59.264891+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:00.265052+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:01.265245+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:02.265537+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:03.265784+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:04.265982+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:05.266192+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:06.266436+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:07.266732+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:08.267014+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:09.267213+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:10.267374+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:11.267591+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:12.267879+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:13.268188+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:14.268378+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 909312 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:15.268567+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:16.268747+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:17.268962+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:18.269198+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:19.269371+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:20.269534+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:21.269686+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:22.269865+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:23.270029+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 884736 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:24.270187+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 876544 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:25.270400+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 876544 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:26.270606+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 876544 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:27.276736+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 876544 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:28.276989+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 876544 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:29.277195+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:30.277391+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:31.277559+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:32.277736+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:33.277946+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:34.278140+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:35.278417+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:36.278671+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:37.278959+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:38.279239+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:39.279430+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:40.279607+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:41.279800+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:42.280008+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:43.280248+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:44.280472+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:45.280868+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:46.281152+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:47.281380+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:48.281617+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 860160 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:49.281798+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:50.281978+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:51.282179+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:52.282436+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:53.282694+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:54.282966+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:55.283364+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:56.283620+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:57.283833+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:58.284047+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:59.284252+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:00.284417+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:01.284646+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:02.284848+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:03.285041+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 851968 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:04.285197+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 843776 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:05.285480+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 843776 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:06.285598+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:07.285740+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:08.285864+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:09.286185+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:10.286347+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:11.286566+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:12.286709+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:13.286901+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:14.287037+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:15.287199+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:16.287387+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:17.287548+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:18.287694+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:19.287834+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:20.287993+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:21.288145+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:22.288311+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:23.288442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:24.288572+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:25.288714+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:26.288876+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:27.289092+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:28.289310+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:29.289493+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:30.289616+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:31.289787+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:32.290010+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:33.290190+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:34.290339+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:35.290540+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:36.290672+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:37.290832+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:38.290983+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:39.291143+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:40.291307+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:41.291487+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:42.291613+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:43.291861+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:44.292003+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:45.294372+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:46.294541+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:47.294689+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:48.294837+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:49.294989+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:50.295095+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:51.295281+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:52.295462+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:53.295621+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:54.295771+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:55.295951+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:56.296130+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:57.296233+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:58.296341+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:59.296444+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:00.296618+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:01.296806+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:02.296998+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:03.297172+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:04.297428+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:05.297644+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:06.297809+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:07.297971+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:08.298164+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:09.298305+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:10.298442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:11.298591+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:12.298720+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:13.298831+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:14.298964+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:15.299159+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 835584 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:16.299301+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 819200 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:17.299440+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 819200 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:18.299581+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:19.299743+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:20.299891+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 5933 writes, 25K keys, 5933 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5933 writes, 990 syncs, 5.99 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:21.300030+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:22.300153+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:23.300304+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:24.300454+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:25.300625+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:26.300798+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:27.300984+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:28.301131+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:29.301293+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:30.301427+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:31.301579+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 811008 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: mgrc ms_handle_reset ms_handle_reset con 0x5633a232c000
Jan 31 06:32:05 compute-0 ceph-osd[88127]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/945587794
Jan 31 06:32:05 compute-0 ceph-osd[88127]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/945587794,v1:192.168.122.100:6801/945587794]
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: get_auth_request con 0x5633a45ce400 auth_method 0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: mgrc handle_mgr_configure stats_period=5
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:32.301733+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:33.301859+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:34.301995+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:35.302146+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:36.302277+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:37.302472+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:38.302615+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:39.302789+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:40.302937+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:41.303066+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:42.303207+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:43.303394+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:44.303533+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:45.303731+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:46.303970+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:47.304233+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:48.304514+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:49.304694+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:50.304868+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:51.305020+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:52.305335+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:53.305688+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:54.305884+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:55.306090+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:56.306271+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:57.306447+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:58.306612+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:59.306781+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:00.306941+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:01.307107+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:02.307304+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:03.307468+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:04.307685+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:05.307919+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:06.308219+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:07.308400+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:08.308590+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:09.308785+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:10.309174+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:11.309346+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:12.309523+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:13.309734+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:14.309949+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:15.310170+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:16.310353+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:17.310526+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:18.310761+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:19.311788+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:20.311959+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:21.312128+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:22.312299+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:23.312484+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:24.312686+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:25.312937+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:26.313082+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:27.313344+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:28.313507+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:29.313806+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:30.313978+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:31.314187+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 596.897949219s of 600.002685547s, submitted: 90
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:32.314368+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 376832 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:33.314534+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 278528 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:34.314735+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 40960 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:35.314914+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 40960 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:36.315244+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:37.315395+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:38.315605+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:39.315762+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:40.315966+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:41.316102+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:42.316263+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:43.316421+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:44.316564+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:45.316773+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:46.316944+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:47.317165+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:48.317280+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:49.317409+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:50.317551+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:51.317706+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 32768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:52.317907+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:53.318091+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:54.318406+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:55.318694+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:56.318940+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:57.319095+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:58.319342+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:59.319478+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:00.319596+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:01.319745+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:02.319868+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:03.319990+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:04.320153+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:05.320367+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:06.320457+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:07.320644+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 24576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:08.320800+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:09.320935+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:10.321079+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:11.321220+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:12.321307+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:13.321477+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:14.321637+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:15.321824+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:16.321966+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:17.322192+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:18.322366+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:19.322508+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:20.322661+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:21.322815+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:22.323005+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:23.323132+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:24.323287+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:25.323459+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:26.323592+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:27.323715+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:28.323875+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:29.323971+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:30.324065+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:31.324210+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:32.324414+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:33.324582+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:34.324736+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:35.324913+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:36.325026+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:37.325134+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:38.325272+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:39.325448+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:40.325627+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:41.325781+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:42.325980+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:43.326189+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:44.326353+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:45.326557+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:46.326673+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:47.326841+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:48.327178+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:49.327336+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:50.327512+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 16384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:51.327639+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:52.327875+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:53.328060+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:54.328235+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:55.328453+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:56.328604+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:57.328753+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:58.328877+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:59.329030+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:00.329180+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xc96e4/0x198000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:01.329296+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:02.329460+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 163840 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: handle_auth_request added challenge on 0x5633a4a5d800
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:03.329616+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981525 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1073152 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 91.483062744s of 91.818931580s, submitted: 114
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:04.329814+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fce8f000/0x0/0x4ffc00000, data 0xcb280/0x19b000, compress 0x0/0x0/0x0, omap 0x11a4e, meta 0x2bbe5b2), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1073152 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _renew_subs
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:05.329989+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17645568 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:06.330185+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 136 ms_handle_reset con 0x5633a4a5d800 session 0x5633a3ea4c40
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: get_auth_request con 0x5633a232c800 auth_method 0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 16572416 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:07.330367+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 79814656 unmapped: 16572416 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: handle_auth_request added challenge on 0x5633a4a67800
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:08.330586+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064145 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 16408576 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:09.330766+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 79978496 unmapped: 16408576 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:10.330948+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fc219000/0x0/0x4ffc00000, data 0xd3ea5b/0xe13000, compress 0x0/0x0/0x0, omap 0x1213a, meta 0x2bbdec6), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _renew_subs
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 137 ms_handle_reset con 0x5633a4a67800 session 0x5633a47bb880
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 16375808 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:11.331079+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 16375808 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:12.331206+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _renew_subs
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fc214000/0x0/0x4ffc00000, data 0xd40613/0xe16000, compress 0x0/0x0/0x0, omap 0x12263, meta 0x2bbdd9d), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80011264 unmapped: 16375808 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 137 handle_osd_map epochs [137,138], i have 138, src has [1,138]
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:13.331315+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069181 data_alloc: 218103808 data_used: 5524
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:14.331442+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:15.331589+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:16.331732+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:17.331956+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:18.332162+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:19.332313+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:20.332478+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:21.340096+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:22.340263+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _renew_subs
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:23.340395+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:24.340535+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:25.340732+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:26.340869+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:27.341043+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 16359424 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:28.341175+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:29.341316+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:30.341457+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:31.341645+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:32.341796+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:33.342047+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:34.342185+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:35.342354+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:36.342496+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:37.342792+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:38.342926+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:39.343065+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:40.343235+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:41.343376+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:42.343591+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:43.343800+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:44.343986+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:45.344159+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:46.344301+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:47.344465+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:48.344625+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:49.344825+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:50.344984+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:51.345137+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:52.345276+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:53.345422+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:54.345563+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:55.345733+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:56.345897+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:57.346044+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:58.346227+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:59.346383+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:00.346536+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:01.346668+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:02.346826+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:03.347076+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:04.347262+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:05.352720+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:06.352881+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:07.353035+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:08.353174+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:09.353321+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:10.353450+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:11.353595+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:12.353743+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:13.353866+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:14.354002+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:15.354240+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:16.354386+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:17.354501+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:18.354610+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:19.354757+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:20.354924+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:21.355073+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:22.355211+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:23.355373+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:24.355564+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:25.355767+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:26.355904+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:27.356054+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:28.356208+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:29.356386+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:30.356540+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:31.356698+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:32.356816+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:33.356956+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:34.357063+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:35.357207+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:36.357341+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:37.357515+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:38.357707+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:39.357848+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:40.358040+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:41.358172+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:42.358287+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:43.358401+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:44.358561+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:45.358732+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:46.358880+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:47.359084+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:48.359273+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:49.359410+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:50.359540+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:51.359668+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:52.359765+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:53.359939+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:54.360099+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:55.360297+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:56.360483+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:57.360687+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:58.360852+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:59.361029+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:00.361202+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:01.361334+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:02.361461+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:03.361625+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:04.361799+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:05.361948+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:06.362065+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:07.362359+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:08.362663+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:09.362968+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:10.363385+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:11.363791+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:12.364074+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:13.364445+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:14.364832+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:15.365072+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80035840 unmapped: 16351232 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:16.365311+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:17.365534+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:18.365847+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:19.366007+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:20.366356+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:21.366666+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:22.366861+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:23.367171+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:24.367341+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:25.367645+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:26.367970+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:27.368164+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:28.368322+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:29.368506+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:30.368693+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:31.368975+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:32.369211+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:33.369454+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:34.369720+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:35.369896+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:36.370231+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:37.370425+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:38.370666+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:39.370825+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:40.371068+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:41.371275+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:42.371411+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:43.371543+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:44.371654+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:45.371837+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:46.371954+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:47.372083+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:48.372229+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:49.372401+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:50.372575+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:51.372708+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:52.372839+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:53.372994+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:54.373186+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:55.373366+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:56.373522+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:57.373685+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:58.373817+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:59.373995+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:00.374226+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:01.374381+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:02.374501+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:03.374623+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:04.374749+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:05.374904+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:06.375031+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:07.375184+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:08.375322+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:09.375446+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:10.375597+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:11.375742+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:12.375877+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:13.376012+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:14.376209+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:15.376364+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:16.376457+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:17.376577+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:18.376657+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:19.376792+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:20.376943+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:21.377073+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:22.377177+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:23.377297+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:24.377462+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:25.377618+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:26.377755+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:27.377930+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:28.378057+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:29.378197+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:30.378410+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:31.378578+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:32.378693+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:33.378837+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:34.378984+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:35.379148+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:36.379291+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:37.379413+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:38.379521+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:39.379744+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:40.379896+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:41.380059+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:42.380235+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:43.380388+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:44.380970+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:45.381388+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:46.381878+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:47.383298+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:48.383435+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:49.383549+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:50.383659+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:51.383797+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:52.383950+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:53.384102+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:54.384256+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:55.384541+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:56.384758+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:57.384904+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:58.385025+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:59.385172+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:00.385313+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:01.385499+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:02.385668+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:03.385799+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:04.385967+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:05.386167+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:06.386267+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:07.386537+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:08.386692+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:09.386871+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:10.387083+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:11.387276+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:12.387510+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:13.387629+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 16343040 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:14.387748+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:15.387910+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:16.388056+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:17.388198+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:18.388325+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:19.388453+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:20.388603+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:21.388765+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:22.388869+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:23.388955+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:24.389149+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:25.389308+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:26.389484+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:27.389600+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:28.389785+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:29.389913+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:30.390021+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:31.390162+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80052224 unmapped: 16334848 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:32.390295+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 16146432 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'config diff' '{prefix=config diff}'
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'config show' '{prefix=config show}'
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:33.390439+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80650240 unmapped: 15736832 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:34.390590+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:05 compute-0 ceph-osd[88127]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:05 compute-0 ceph-osd[88127]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069309 data_alloc: 218103808 data_used: 6109
Jan 31 06:32:05 compute-0 ceph-osd[88127]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 15441920 heap: 96387072 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:05 compute-0 ceph-osd[88127]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd42092/0xe19000, compress 0x0/0x0/0x0, omap 0x122b0, meta 0x2bbdd50), peers [0,1] op hist [])
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: tick
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_tickets
Jan 31 06:32:05 compute-0 ceph-osd[88127]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:35.398369+0000)
Jan 31 06:32:05 compute-0 ceph-osd[88127]: do_command 'log dump' '{prefix=log dump}'
Jan 31 06:32:05 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 31 06:32:05 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/582564847' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 06:32:06 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 06:32:06 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776523626' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 06:32:06 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:06 compute-0 ceph-mon[75251]: from='client.14530 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:06 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3386774123' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 06:32:06 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4282221873' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 06:32:06 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3744215999' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 06:32:06 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/582564847' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 06:32:06 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14540 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:06 compute-0 ceph-mgr[75550]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 06:32:06 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: 2026-01-31T06:32:06.425+0000 7fc402b84640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 06:32:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 06:32:07 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1114734766' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 06:32:07 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 31 06:32:07 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1843116303' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 06:32:08 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:08 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:08 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 31 06:32:08 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653434397' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 06:32:08 compute-0 podman[256223]: 2026-01-31 06:32:08.458992193 +0000 UTC m=+0.054724576 container health_status 5d4b856e5b047ec6a8a7503b33ec4559572b21ef6de42ca0078f0c084bc67b08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 06:32:08 compute-0 podman[256212]: 2026-01-31 06:32:08.485006356 +0000 UTC m=+0.079822742 container health_status 1e783390ff00c05f390135b60ef734ccf9db2d87ba2e12546059972a59c6f695 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '1d9fe9b2d7c8dc5fcc58cacc9d3f4c69729b940cb664a3a1e5ade20069d41596-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d-673b428bb0508c5b126a8e0d695f1f0502fcde3f3206406a3f645343308e141d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 06:32:08 compute-0 crontab[256320]: (root) LIST (root)
Jan 31 06:32:08 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/776523626' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 06:32:08 compute-0 ceph-mon[75251]: pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:08 compute-0 ceph-mon[75251]: from='client.14540 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:08 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1114734766' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 06:32:08 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1843116303' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 06:32:08 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 06:32:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150817308' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 06:32:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:52.143697+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 10223616 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:53.144104+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 10215424 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:54.144388+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 10207232 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976873 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:55.144529+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:24.320082+0000 osd.1 (osd.1) 160 : cluster [DBG] 6.b scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:24.333824+0000 osd.1 (osd.1) 161 : cluster [DBG] 6.b scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 161)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:24.320082+0000 osd.1 (osd.1) 160 : cluster [DBG] 6.b scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:24.333824+0000 osd.1 (osd.1) 161 : cluster [DBG] 6.b scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 10199040 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:56.144722+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 10199040 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:57.144828+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.843222618s of 10.869216919s, submitted: 4
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 10199040 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:58.144952+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:27.318056+0000 osd.1 (osd.1) 162 : cluster [DBG] 8.5 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:27.328633+0000 osd.1 (osd.1) 163 : cluster [DBG] 8.5 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 163)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:27.318056+0000 osd.1 (osd.1) 162 : cluster [DBG] 8.5 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:27.328633+0000 osd.1 (osd.1) 163 : cluster [DBG] 8.5 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 10190848 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:59.145172+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 10190848 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981697 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:00.145290+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:29.347079+0000 osd.1 (osd.1) 164 : cluster [DBG] 11.7 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:29.357645+0000 osd.1 (osd.1) 165 : cluster [DBG] 11.7 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 10182656 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 165)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:29.347079+0000 osd.1 (osd.1) 164 : cluster [DBG] 11.7 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:29.357645+0000 osd.1 (osd.1) 165 : cluster [DBG] 11.7 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:01.145482+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:30.317653+0000 osd.1 (osd.1) 166 : cluster [DBG] 8.19 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:30.331745+0000 osd.1 (osd.1) 167 : cluster [DBG] 8.19 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 10182656 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 167)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:30.317653+0000 osd.1 (osd.1) 166 : cluster [DBG] 8.19 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:30.331745+0000 osd.1 (osd.1) 167 : cluster [DBG] 8.19 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:02.145648+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 10174464 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:03.145780+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 10174464 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:04.145957+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:33.303463+0000 osd.1 (osd.1) 168 : cluster [DBG] 11.1d scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:33.314047+0000 osd.1 (osd.1) 169 : cluster [DBG] 11.1d scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 10158080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986525 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 169)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:33.303463+0000 osd.1 (osd.1) 168 : cluster [DBG] 11.1d scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:33.314047+0000 osd.1 (osd.1) 169 : cluster [DBG] 11.1d scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:05.146193+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 10158080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:06.146396+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 10158080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:07.146551+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 10141696 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:08.146726+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:37.205910+0000 osd.1 (osd.1) 170 : cluster [DBG] 8.1e scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:37.216457+0000 osd.1 (osd.1) 171 : cluster [DBG] 8.1e scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.832933426s of 10.859903336s, submitted: 10
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 171)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:37.205910+0000 osd.1 (osd.1) 170 : cluster [DBG] 8.1e scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:37.216457+0000 osd.1 (osd.1) 171 : cluster [DBG] 8.1e scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 10133504 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:09.146899+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:38.178002+0000 osd.1 (osd.1) 172 : cluster [DBG] 6.4 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:38.202693+0000 osd.1 (osd.1) 173 : cluster [DBG] 6.4 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 173)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:38.178002+0000 osd.1 (osd.1) 172 : cluster [DBG] 6.4 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:38.202693+0000 osd.1 (osd.1) 173 : cluster [DBG] 6.4 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 10133504 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991349 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:10.147073+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 10125312 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:11.147195+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 10125312 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:12.147304+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 10117120 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:13.147454+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:42.327533+0000 osd.1 (osd.1) 174 : cluster [DBG] 8.a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:42.338137+0000 osd.1 (osd.1) 175 : cluster [DBG] 8.a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 175)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:42.327533+0000 osd.1 (osd.1) 174 : cluster [DBG] 8.a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:42.338137+0000 osd.1 (osd.1) 175 : cluster [DBG] 8.a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 10117120 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:14.147663+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:43.361301+0000 osd.1 (osd.1) 176 : cluster [DBG] 8.13 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:43.371864+0000 osd.1 (osd.1) 177 : cluster [DBG] 8.13 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 177)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:43.361301+0000 osd.1 (osd.1) 176 : cluster [DBG] 8.13 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:43.371864+0000 osd.1 (osd.1) 177 : cluster [DBG] 8.13 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 10108928 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996173 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:15.147886+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 10100736 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:16.148242+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86368256 unmapped: 10092544 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:17.148407+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 10084352 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:18.148547+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:47.374204+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.1a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T05:59:47.384702+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.1a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 10084352 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 179)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:47.374204+0000 osd.1 (osd.1) 178 : cluster [DBG] 10.1a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T05:59:47.384702+0000 osd.1 (osd.1) 179 : cluster [DBG] 10.1a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:19.148723+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 10076160 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998588 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:20.148870+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 10076160 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:21.149000+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86392832 unmapped: 10067968 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:22.149185+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86392832 unmapped: 10067968 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:23.149347+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86392832 unmapped: 10067968 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:24.149531+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86401024 unmapped: 10059776 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998588 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:25.149672+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86401024 unmapped: 10059776 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:26.149802+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 10051584 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:27.149959+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 10051584 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:28.150095+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 10043392 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:29.150168+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 10035200 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998588 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:30.150285+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 10035200 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:31.150406+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 10027008 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:32.150521+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.284536362s of 24.410636902s, submitted: 8
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 10027008 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:33.150645+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:02.588695+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.2 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:02.599402+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.2 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 181)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:02.588695+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.2 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:02.599402+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.2 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 10018816 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:34.150813+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 10018816 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001001 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:35.150946+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 10018816 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:36.151075+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:05.605808+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.11 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:05.616416+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.11 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 183)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:05.605808+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.11 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:05.616416+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.11 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86450176 unmapped: 10010624 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:37.151284+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86450176 unmapped: 10010624 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:38.151426+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86458368 unmapped: 10002432 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:39.151533+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86458368 unmapped: 10002432 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003416 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:40.151667+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86458368 unmapped: 10002432 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:41.151796+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86466560 unmapped: 9994240 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:42.151922+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.977724075s of 10.119025230s, submitted: 4
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86466560 unmapped: 9994240 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:43.152058+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:12.707670+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.10 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:12.718195+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.10 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 185)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:12.707670+0000 osd.1 (osd.1) 184 : cluster [DBG] 10.10 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:12.718195+0000 osd.1 (osd.1) 185 : cluster [DBG] 10.10 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:44.152318+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 9986048 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:45.152544+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 9986048 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005831 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:46.152691+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 9986048 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:47.152822+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86482944 unmapped: 9977856 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:48.152982+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86482944 unmapped: 9977856 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:49.153104+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 9969664 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:50.153327+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 9969664 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005831 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:51.153438+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 9969664 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:52.153580+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 9961472 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.108231544s of 10.116699219s, submitted: 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:53.153744+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:22.823896+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.13 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:22.834465+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.13 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 9961472 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 187)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:22.823896+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.13 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:22.834465+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.13 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:54.154197+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 9945088 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:55.154486+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:24.832559+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.f scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:24.843138+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.f scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 9920512 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010659 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 189)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:24.832559+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.f scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:24.843138+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.f scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:56.154696+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 9912320 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:57.154818+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 9912320 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:58.154962+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 9912320 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:59.155074+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 9904128 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:00.155156+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 9904128 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010659 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:01.155293+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 9904128 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:02.155442+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 9895936 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:03.155605+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:32.753211+0000 osd.1 (osd.1) 190 : cluster [DBG] 10.6 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:32.763709+0000 osd.1 (osd.1) 191 : cluster [DBG] 10.6 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 9895936 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.769048691s of 10.969296455s, submitted: 6
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 191)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:32.753211+0000 osd.1 (osd.1) 190 : cluster [DBG] 10.6 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:32.763709+0000 osd.1 (osd.1) 191 : cluster [DBG] 10.6 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:04.155773+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:33.793787+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.b scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:33.804343+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.b scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 9887744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 193)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:33.793787+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.b scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:33.804343+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.b scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:05.155935+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:34.762817+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.19 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:34.773417+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.19 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 9887744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017900 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 195)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:34.762817+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.19 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:34.773417+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.19 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:06.156096+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86573056 unmapped: 9887744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:07.156378+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 9879552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:08.156667+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86581248 unmapped: 9879552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:09.156787+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 9871360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:10.156924+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 9871360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017900 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:11.157073+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86589440 unmapped: 9871360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:12.157186+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 9863168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:13.157405+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 9863168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.859642982s of 10.026967049s, submitted: 6
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:14.157554+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:43.703797+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.12 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:43.717911+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.12 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86622208 unmapped: 9838592 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 197)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:43.703797+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.12 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:43.717911+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.12 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:15.157735+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86622208 unmapped: 9838592 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020315 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:16.157872+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86622208 unmapped: 9838592 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:17.157994+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86630400 unmapped: 9830400 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:18.158185+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86630400 unmapped: 9830400 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:19.158347+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:48.747381+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.14 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:48.761546+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.14 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 199)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:48.747381+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.14 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:48.761546+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.14 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86638592 unmapped: 9822208 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:20.158560+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86638592 unmapped: 9822208 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022730 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:21.158695+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86638592 unmapped: 9822208 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:22.158812+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86654976 unmapped: 9805824 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:23.158937+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86654976 unmapped: 9805824 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.894059181s of 10.031656265s, submitted: 4
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:24.159089+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:53.727917+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.15 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:00:53.757948+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.15 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86671360 unmapped: 9789440 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 201)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:53.727917+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.15 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:00:53.757948+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.15 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:25.159334+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 9797632 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025143 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:26.159448+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 9797632 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:27.159572+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86671360 unmapped: 9789440 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:28.159743+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86671360 unmapped: 9789440 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:29.159859+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86679552 unmapped: 9781248 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:30.160070+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86679552 unmapped: 9781248 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025143 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:31.160231+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86679552 unmapped: 9781248 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:32.160355+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86687744 unmapped: 9773056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:33.160496+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86687744 unmapped: 9773056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:34.160616+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 9764864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:35.160761+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025143 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86695936 unmapped: 9764864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:36.160888+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86704128 unmapped: 9756672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:37.161088+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86704128 unmapped: 9756672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:38.161396+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9740288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:39.161538+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86720512 unmapped: 9740288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:40.161663+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025143 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.838331223s of 16.838333130s, submitted: 0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 9715712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:41.161784+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:10.690855+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.14 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:10.736771+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.14 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 203)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:10.690855+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.14 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:10.736771+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.14 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 9715712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:42.163240+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:43.163405+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86745088 unmapped: 9715712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:44.163667+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:13.732211+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.2 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:13.778106+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.2 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86753280 unmapped: 9707520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 205)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:13.732211+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.2 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:13.778106+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.2 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:45.163889+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86753280 unmapped: 9707520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029967 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:46.164066+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86761472 unmapped: 9699328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:47.164410+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86761472 unmapped: 9699328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:48.164641+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 9691136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:49.164815+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86769664 unmapped: 9691136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:50.164957+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 9682944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029967 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:51.165199+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86777856 unmapped: 9682944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.005162239s of 11.064388275s, submitted: 4
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:52.165336+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:21.755258+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.0 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:21.811580+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.0 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86786048 unmapped: 9674752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 207)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:21.755258+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.0 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:21.811580+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.0 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:53.165520+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:22.741061+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:22.786818+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 9666560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 209)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:22.741061+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:22.786818+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:54.166307+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:23.758422+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.4 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:23.807754+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.4 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 9658368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 211)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:23.758422+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.4 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:23.807754+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.4 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:55.166524+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 9650176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037328 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:56.166641+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 9650176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:57.166775+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86810624 unmapped: 9650176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:58.166951+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:27.656644+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.1a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:27.688330+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.1a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 9641984 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 213)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:27.656644+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.1a scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:27.688330+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.1a scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:59.167109+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:28.644697+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.10 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:28.662376+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.10 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86843392 unmapped: 9617408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 215)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:28.644697+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.10 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:28.662376+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.10 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:00.167312+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:29.672967+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.12 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:29.701249+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.12 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86851584 unmapped: 9609216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044567 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 217)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:29.672967+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.12 scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:29.701249+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.12 scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:01.167496+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  log_queue is 2 last_log 219 sent 217 num 2 unsent 2 sending 2
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:30.692674+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.1f scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  will send 2026-01-31T06:01:30.724472+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.1f scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 9601024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client handle_log_ack log(last 219)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:30.692674+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.1f scrub starts
Jan 31 06:32:09 compute-0 ceph-osd[87070]: log_client  logged 2026-01-31T06:01:30.724472+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.1f scrub ok
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:02.167703+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86859776 unmapped: 9601024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:03.167827+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 9592832 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:04.167973+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 9592832 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:05.168134+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 9584640 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:06.168257+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 9584640 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:07.168385+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 9576448 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:08.168544+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86884352 unmapped: 9576448 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:09.168660+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 9568256 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:10.168804+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 9568256 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:11.168951+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86892544 unmapped: 9568256 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:12.169161+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 9560064 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:13.169324+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86900736 unmapped: 9560064 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:14.169520+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 9551872 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:15.169672+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 9551872 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:16.169968+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 9551872 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:17.170137+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 9543680 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:18.170293+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86917120 unmapped: 9543680 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:19.170442+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 9535488 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:20.171042+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 9535488 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:21.171226+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 9527296 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:22.171405+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 9527296 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:23.171569+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 9527296 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:24.171745+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 9519104 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:25.171924+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86941696 unmapped: 9519104 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:26.172092+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 9510912 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:27.172286+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 9510912 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:28.172431+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 9510912 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:29.172570+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9502720 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:30.172765+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9502720 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:31.172945+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 9494528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:32.173077+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 9494528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:33.173292+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 9486336 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:34.173470+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 9486336 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:35.173584+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86974464 unmapped: 9486336 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:36.173725+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 9478144 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:37.173860+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 9478144 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:38.174080+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 9478144 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:39.174203+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 9469952 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:40.174346+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86990848 unmapped: 9469952 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:41.174470+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 9461760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:42.174629+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 9461760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:43.174758+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 86999040 unmapped: 9461760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:44.174990+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 9445376 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:45.175164+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 9445376 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:46.175400+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 9437184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:47.176570+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 9437184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:48.177626+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 9428992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:49.177953+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 9428992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:50.178749+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 9428992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:51.179240+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 9420800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:52.179555+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87040000 unmapped: 9420800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:53.179682+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 9412608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:54.179989+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87048192 unmapped: 9412608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:55.180123+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87056384 unmapped: 9404416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:56.180280+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87056384 unmapped: 9404416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:57.180485+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87056384 unmapped: 9404416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:58.180686+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87064576 unmapped: 9396224 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:59.180850+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87064576 unmapped: 9396224 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:00.181228+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87072768 unmapped: 9388032 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:01.181392+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87072768 unmapped: 9388032 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:02.181536+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87072768 unmapped: 9388032 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:03.181819+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87080960 unmapped: 9379840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:04.181992+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87089152 unmapped: 9371648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:05.182211+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87097344 unmapped: 9363456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:06.182352+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87097344 unmapped: 9363456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:07.182509+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 9355264 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:08.182756+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 9355264 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:09.182987+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 9355264 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:10.183131+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 9347072 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:11.183282+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87113728 unmapped: 9347072 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:12.183452+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 9338880 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:13.183644+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 9338880 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:14.183809+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87121920 unmapped: 9338880 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:15.183925+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87130112 unmapped: 9330688 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:16.184042+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87130112 unmapped: 9330688 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:17.184166+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87138304 unmapped: 9322496 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:18.184316+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87138304 unmapped: 9322496 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:19.184427+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 9314304 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:20.184628+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 9314304 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:21.184812+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87146496 unmapped: 9314304 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:22.184939+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87154688 unmapped: 9306112 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:23.185198+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87154688 unmapped: 9306112 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:24.185340+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87162880 unmapped: 9297920 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:25.185488+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87162880 unmapped: 9297920 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:26.185617+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87162880 unmapped: 9297920 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:27.185743+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87171072 unmapped: 9289728 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:28.185902+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87179264 unmapped: 9281536 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:29.186058+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87179264 unmapped: 9281536 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:30.186246+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87179264 unmapped: 9281536 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:31.186386+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87179264 unmapped: 9281536 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:32.186556+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9273344 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:33.186765+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9273344 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:34.186895+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87195648 unmapped: 9265152 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:35.187016+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87195648 unmapped: 9265152 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:36.187180+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87203840 unmapped: 9256960 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:37.187328+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87203840 unmapped: 9256960 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:38.187502+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87212032 unmapped: 9248768 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:39.187649+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87212032 unmapped: 9248768 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:40.187859+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87212032 unmapped: 9248768 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:41.188050+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 9240576 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:42.188195+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87220224 unmapped: 9240576 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:43.188308+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 9232384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:44.188482+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 9232384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:45.188655+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 9232384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:46.188768+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87236608 unmapped: 9224192 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:47.188890+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87236608 unmapped: 9224192 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:48.189061+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 9216000 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:49.189249+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 9216000 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:50.189391+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 9216000 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:51.189522+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 9207808 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:52.189608+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 9207808 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:53.189737+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9199616 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:54.189898+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9199616 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:55.190034+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 9191424 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:56.190165+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 9191424 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:57.190285+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87269376 unmapped: 9191424 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:58.190435+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 9183232 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:59.190546+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 9183232 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:00.190696+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 9175040 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:01.190835+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 9175040 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:02.190970+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 9166848 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:03.191077+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 9166848 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:04.191161+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 9166848 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:05.191263+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 9158656 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:06.191377+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87302144 unmapped: 9158656 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:07.191490+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 9150464 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:08.191642+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87310336 unmapped: 9150464 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:09.191791+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 9134080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:10.191938+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 9134080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:11.192064+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 9134080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:12.192207+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 9125888 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:13.192495+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 9125888 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:14.192626+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 9142272 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:15.192754+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 9142272 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:16.192959+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 9142272 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:17.193695+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 9142272 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:18.194225+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87318528 unmapped: 9142272 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:19.194418+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 9134080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:20.194676+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 9134080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:21.195010+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87326720 unmapped: 9134080 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:22.195159+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 9125888 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:23.195355+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87334912 unmapped: 9125888 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:24.195517+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87343104 unmapped: 9117696 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:25.195647+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87343104 unmapped: 9117696 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:26.195828+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 9109504 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:27.196005+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 9109504 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:28.196212+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 9109504 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:29.196340+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 9101312 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:30.196481+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 9101312 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:31.196755+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87367680 unmapped: 9093120 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:32.197156+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87367680 unmapped: 9093120 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:33.197365+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87367680 unmapped: 9093120 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:34.197482+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 9084928 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:35.197604+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 9084928 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:36.197714+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87384064 unmapped: 9076736 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:37.197837+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87384064 unmapped: 9076736 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:38.198067+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9068544 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:39.198217+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9068544 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:40.198368+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9068544 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:41.198500+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87400448 unmapped: 9060352 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:42.198665+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87400448 unmapped: 9060352 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:43.198793+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 9052160 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:44.198928+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 9052160 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:45.199059+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 9043968 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:46.199184+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 9043968 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:47.199280+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 9035776 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:48.199456+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 9035776 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:49.199614+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 9027584 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:50.199733+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 9027584 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:51.199888+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 9027584 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:52.200018+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 9019392 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:53.200175+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 9019392 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:54.200348+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 9019392 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:55.200491+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 9011200 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:56.200660+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 9011200 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:57.200839+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 9003008 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:58.201051+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 9003008 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:59.201221+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 8994816 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:00.201322+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 8994816 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:01.201467+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 8994816 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:02.201592+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 8986624 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:03.201705+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 8986624 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:04.201811+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 8978432 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:05.201917+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 8978432 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:06.202019+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 8970240 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:07.202146+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 8970240 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:08.202261+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 8962048 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:09.202374+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 8962048 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:10.202499+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 8962048 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:11.202609+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 8953856 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:12.202730+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 8953856 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:13.202849+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 8312 writes, 34K keys, 8312 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8312 writes, 1633 syncs, 5.09 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8312 writes, 34K keys, 8312 commit groups, 1.0 writes per commit group, ingest: 21.26 MB, 0.04 MB/s
                                           Interval WAL: 8312 writes, 1633 syncs, 5.09 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87572480 unmapped: 8888320 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:14.203002+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87572480 unmapped: 8888320 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:15.203182+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87572480 unmapped: 8888320 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:16.203306+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 8880128 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:17.203466+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 8880128 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:18.203675+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87588864 unmapped: 8871936 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:19.203834+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87588864 unmapped: 8871936 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:20.203960+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 8863744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:21.204095+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 8863744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:22.204278+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87597056 unmapped: 8863744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:23.204478+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87605248 unmapped: 8855552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:24.204610+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87605248 unmapped: 8855552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:25.204724+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87613440 unmapped: 8847360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:26.204893+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87613440 unmapped: 8847360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:27.205027+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87613440 unmapped: 8847360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:28.205183+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 8839168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:29.206294+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87621632 unmapped: 8839168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:30.206448+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 8830976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:31.206926+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87629824 unmapped: 8830976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:32.207229+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8822784 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:33.207782+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8822784 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:34.207949+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8822784 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:35.208086+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 8814592 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:36.208512+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 8814592 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:37.208643+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87654400 unmapped: 8806400 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:38.209021+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87654400 unmapped: 8806400 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:39.209192+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87662592 unmapped: 8798208 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:40.209517+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87662592 unmapped: 8798208 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:41.209841+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87670784 unmapped: 8790016 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:42.210037+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87670784 unmapped: 8790016 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:43.210184+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87670784 unmapped: 8790016 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:44.210307+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87678976 unmapped: 8781824 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:45.210507+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87678976 unmapped: 8781824 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:46.210743+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87687168 unmapped: 8773632 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:47.210891+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87687168 unmapped: 8773632 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:48.211045+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87687168 unmapped: 8773632 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:49.211289+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87695360 unmapped: 8765440 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:50.211530+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87695360 unmapped: 8765440 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:51.211704+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87695360 unmapped: 8765440 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:52.211833+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 8757248 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:53.211969+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87703552 unmapped: 8757248 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:54.212191+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87711744 unmapped: 8749056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:55.212352+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87711744 unmapped: 8749056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:56.212562+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87719936 unmapped: 8740864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:57.212710+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87719936 unmapped: 8740864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:58.212968+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87719936 unmapped: 8740864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:59.213184+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87728128 unmapped: 8732672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:00.213534+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:01.213906+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87728128 unmapped: 8732672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:02.214052+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 8724480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:03.214219+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 8724480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:04.214500+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 8724480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:05.214664+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87744512 unmapped: 8716288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:06.215066+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87744512 unmapped: 8716288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:07.215362+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87752704 unmapped: 8708096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:08.215694+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87752704 unmapped: 8708096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:09.215935+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 8699904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:10.216174+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 8699904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:11.216354+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 8699904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:12.216507+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 8691712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:13.216686+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 8691712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:14.216879+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 8683520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:15.217033+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 8683520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:16.217239+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 8683520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:17.217406+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 8675328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:18.218338+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 8675328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:19.218495+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 8667136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:20.218713+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 8667136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:21.218861+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87801856 unmapped: 8658944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:22.219015+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87801856 unmapped: 8658944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:23.219168+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87801856 unmapped: 8658944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:24.219309+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87810048 unmapped: 8650752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:25.219465+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87810048 unmapped: 8650752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:26.219609+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87818240 unmapped: 8642560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:27.219737+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87818240 unmapped: 8642560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:28.219868+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87818240 unmapped: 8642560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:29.219990+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87826432 unmapped: 8634368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:30.220160+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 8626176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:31.220340+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 8626176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 280.182006836s of 280.244415283s, submitted: 14
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:32.220453+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 8601600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:33.220568+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 8626176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:34.220689+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 8568832 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:35.220913+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 8568832 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:36.221068+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 8544256 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:37.221210+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 8544256 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:38.221406+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 8544256 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:39.221580+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 8536064 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:40.221773+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 8536064 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:41.222003+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 8536064 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:42.222196+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 8527872 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:43.222354+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 8527872 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:44.222539+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 8519680 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:45.222699+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 8519680 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:46.222807+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 8511488 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:47.222928+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 8511488 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:48.223072+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 8511488 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:49.223238+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 8503296 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:50.223372+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87957504 unmapped: 8503296 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:51.223623+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87965696 unmapped: 8495104 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:52.223804+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87965696 unmapped: 8495104 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:53.223987+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87973888 unmapped: 8486912 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:54.224193+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87973888 unmapped: 8486912 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:55.224346+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87973888 unmapped: 8486912 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:56.224486+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 8478720 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:57.224624+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 8478720 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:58.225283+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87982080 unmapped: 8478720 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:59.225430+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87990272 unmapped: 8470528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:00.225611+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87990272 unmapped: 8470528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:01.225714+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87998464 unmapped: 8462336 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:02.225917+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 87998464 unmapped: 8462336 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:03.226061+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88006656 unmapped: 8454144 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:04.226253+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88006656 unmapped: 8454144 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:05.226394+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88014848 unmapped: 8445952 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:06.226511+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88014848 unmapped: 8445952 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:07.226644+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88014848 unmapped: 8445952 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:08.226837+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88023040 unmapped: 8437760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:09.226942+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88023040 unmapped: 8437760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:10.227074+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88031232 unmapped: 8429568 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:11.227191+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88031232 unmapped: 8429568 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:12.227332+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 8421376 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:13.227460+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 8421376 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:14.227587+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 8421376 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:15.227726+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:16.227862+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:17.227990+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:18.228161+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:19.228297+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:20.228489+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:21.228610+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:22.228724+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:23.228878+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88047616 unmapped: 8413184 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:24.229033+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:25.229244+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:26.229395+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:27.229540+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:28.229745+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:29.229917+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:30.230056+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:31.230169+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:32.230299+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:33.230418+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:34.230554+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:35.230677+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:36.230819+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:37.230927+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:38.231088+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 8404992 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:39.231198+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:40.231347+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:41.231478+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:42.231620+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:43.231804+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:44.231928+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:45.232046+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:46.232184+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:47.232316+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:48.232491+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:49.232593+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 8396800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:50.232791+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:51.232941+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:52.233078+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:53.233213+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:54.233329+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:55.233477+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:56.233622+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:57.233766+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:58.233912+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:59.234065+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:00.234198+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:01.234352+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:02.234490+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:03.234617+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:04.234760+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:05.234886+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:06.235063+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:07.235356+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:08.235553+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:09.235683+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:10.235848+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:11.236043+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:12.236241+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:13.236391+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:14.236548+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:15.242824+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:16.242993+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:17.243193+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:18.243389+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:19.243534+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:20.243669+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:21.243803+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:22.244057+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:23.244195+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:24.244421+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:25.244598+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:26.244746+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:27.244861+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:28.244983+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:29.245191+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:30.245318+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:31.245432+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:32.245539+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:33.245672+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:34.245785+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:35.245983+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:36.246165+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:37.246298+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:38.246430+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:39.246581+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:40.246816+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:41.246932+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:42.247058+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:43.247201+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:44.247469+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:45.247625+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:46.247734+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:47.247854+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:48.248020+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:49.248146+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:50.248290+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:51.248413+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:52.248586+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:53.248698+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:54.248860+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:55.249057+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:56.249307+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88072192 unmapped: 8388608 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:57.249446+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:58.249660+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:59.249787+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:00.249943+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:01.250123+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:02.250275+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:03.255412+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:04.255553+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:05.255690+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:06.255820+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:07.255942+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:08.256203+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:09.256373+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:10.256517+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:11.256654+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:12.256785+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:13.256923+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:14.257038+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88080384 unmapped: 8380416 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:15.257215+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 8372224 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:16.257345+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 8372224 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:17.257646+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 8372224 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:18.257799+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 8372224 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:19.257922+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88088576 unmapped: 8372224 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:20.258050+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 8364032 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:21.258213+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 8364032 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:22.258414+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 8364032 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:23.258543+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88096768 unmapped: 8364032 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:24.258706+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:25.258930+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:26.259149+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:27.259340+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:28.259492+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:29.259676+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:30.259949+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:31.260156+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:32.260281+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:33.260403+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:34.260539+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:35.260688+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:36.260810+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:37.260961+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:38.272656+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:39.272791+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:40.272978+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:41.273090+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:42.273243+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:43.274167+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:44.274375+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:45.274576+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:46.274739+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:47.274967+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:48.275169+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:49.275297+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88104960 unmapped: 8355840 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:50.275608+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:51.275760+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:52.275988+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:53.276214+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:54.276339+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:55.276456+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:56.276591+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:57.276732+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:58.276891+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:59.277090+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:00.277261+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:01.277431+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:02.277556+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:03.277760+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:04.277931+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:05.278171+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:06.278313+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:07.278456+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88113152 unmapped: 8347648 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:08.278717+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:09.278903+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:10.279036+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:11.279189+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:12.279331+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:13.279499+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:14.279615+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:15.279758+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:16.279937+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:17.280163+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:18.280420+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:19.280573+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:20.280713+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:21.280886+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:22.281106+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88121344 unmapped: 8339456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: mgrc ms_handle_reset ms_handle_reset con 0x562643ab4000
Jan 31 06:32:09 compute-0 ceph-osd[87070]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/945587794
Jan 31 06:32:09 compute-0 ceph-osd[87070]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/945587794,v1:192.168.122.100:6801/945587794]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: get_auth_request con 0x562646b74800 auth_method 0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: mgrc handle_mgr_configure stats_period=5
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:23.281340+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7839744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 ms_handle_reset con 0x562642e06400 session 0x562643633c00
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: handle_auth_request added challenge on 0x562646912000
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 ms_handle_reset con 0x56264406d000 session 0x56264340ec40
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: handle_auth_request added challenge on 0x56264406d800
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:24.281474+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:25.281582+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:26.281776+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:27.281916+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:28.282095+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:29.282257+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:30.282447+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:31.282602+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:32.282745+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:33.282893+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:34.283099+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:35.283354+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:36.283566+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:37.283777+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:38.283948+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:39.284094+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:40.284345+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:41.284486+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:42.284637+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:43.284782+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:44.284900+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:45.285040+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:46.285264+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:47.285409+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:48.285589+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:49.285756+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:50.286008+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:51.286219+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:52.286727+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:53.286856+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:54.286986+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:55.287129+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:56.287284+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:57.287494+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:58.287751+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:59.287904+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:00.288042+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:01.288223+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:02.288366+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:03.288517+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:04.288654+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:05.288855+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:06.289088+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:07.289306+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:08.289466+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:09.289649+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:10.289803+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:11.289939+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:12.290103+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:13.290254+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:14.290450+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:15.290826+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:16.290987+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:17.291137+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:18.291323+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:19.291468+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:20.291595+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:21.291752+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:22.291908+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:23.292438+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:24.292573+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:25.292714+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:26.292925+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:27.293101+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:28.293368+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:29.293624+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:30.293793+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:31.293931+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 297.631713867s of 300.224700928s, submitted: 90
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:32.294100+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:33.294316+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88727552 unmapped: 7733248 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:34.294466+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:35.294642+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:36.294865+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:37.295031+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:38.295258+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:39.295386+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:40.295607+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:41.295776+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:42.295922+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:43.296070+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:44.296175+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:45.296336+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:46.296453+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:47.296579+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:48.296770+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:49.297064+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:50.297397+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:51.298237+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:52.298454+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:53.298616+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:54.298756+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:55.298915+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:56.299048+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:57.299213+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:58.299396+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:59.299564+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:00.299703+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:01.299829+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:02.299956+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:03.300097+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:04.300271+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:05.300434+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:06.300545+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:07.300691+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:08.300851+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:09.301174+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:10.301292+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:11.301459+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:12.301620+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:13.301778+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:14.301949+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:15.302088+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:16.302190+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:17.302364+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:18.302604+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:19.302765+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:20.303006+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:21.303666+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:22.304349+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:23.304479+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:24.304607+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:25.304737+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:26.304911+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:27.305026+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:28.305208+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:29.305335+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:30.305490+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88743936 unmapped: 7716864 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:31.305696+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:32.305815+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:33.305929+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:34.306041+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:35.306209+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:36.306398+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:37.306512+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:38.306664+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:39.306789+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:40.306896+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:41.307022+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:42.307161+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:43.307386+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:44.307531+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:45.307701+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:46.307862+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:47.308044+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:48.308165+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:49.308325+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:50.308557+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:51.308718+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:52.308855+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:53.309033+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:54.309283+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:55.309439+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:56.309690+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:57.309905+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:58.310145+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:59.310653+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:00.310864+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:01.311236+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:02.311573+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:03.311906+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:04.312235+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:05.317985+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:06.318385+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:07.318719+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:08.319171+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:09.319252+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:10.361259+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:11.361552+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:12.361654+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:13.361858+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:14.362056+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:15.362212+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:16.362458+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:17.362656+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:18.362834+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:19.362976+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:20.363144+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:21.363298+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:22.363476+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:23.363620+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:24.363764+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:25.363899+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:26.364080+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:27.364249+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:28.364445+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:29.364592+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:30.364792+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:31.364950+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:32.365063+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:33.365218+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:34.365354+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:35.365445+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:36.365548+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:37.365688+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:38.365861+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:39.366005+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:40.366182+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:41.366315+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:42.366437+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:43.366617+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:44.366759+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:45.366885+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:46.367023+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:47.367380+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:48.367616+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:49.367752+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:50.367932+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:51.368106+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:52.368301+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:53.368441+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:54.368630+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:55.368929+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:56.369171+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:57.369379+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:58.369599+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:59.369773+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:00.369950+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:01.370226+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:02.370402+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:03.370548+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:04.370732+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:05.370895+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:06.371086+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:07.371232+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:08.371434+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:09.371578+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:10.371657+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:11.371781+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:12.371902+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:13.372068+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:14.372229+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:15.372373+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:16.372489+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:17.372585+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:18.372752+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:19.372860+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:20.372957+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:21.373104+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:22.373356+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:23.373525+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:24.373719+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:25.373885+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:26.374028+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:27.374135+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:28.374260+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:29.374500+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:30.374677+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:31.374839+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:32.374945+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:33.375074+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:34.375288+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:35.375431+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:36.375613+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:37.375745+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:38.375988+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:39.376172+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:40.376302+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:41.376443+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:42.376595+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:43.376758+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:44.377018+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:45.377259+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:46.377569+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:47.377765+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:48.377979+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:49.378242+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:50.378375+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:51.378605+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:52.378769+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:53.379230+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:54.379445+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:55.379653+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:56.379866+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:57.380096+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:58.380395+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:59.380552+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:00.380809+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:01.381021+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:02.381193+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:03.381373+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:04.381571+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:05.381815+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:06.382069+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:07.382326+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:08.384666+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:09.384831+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:10.384994+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:11.385223+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:12.385378+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:13.385599+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 8536 writes, 35K keys, 8536 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8536 writes, 1745 syncs, 4.89 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceba30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562641ceb8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:14.385766+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:15.386012+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:16.386187+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:17.386361+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:18.386575+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:19.386733+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:20.386907+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:21.387071+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:22.387257+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:23.387443+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:24.387568+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:25.387716+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:26.387859+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:27.388016+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:28.388245+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:29.388383+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:30.388537+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:31.388669+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:32.388876+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:33.389022+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:34.389198+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:35.389331+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:36.389515+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:37.389659+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:38.389855+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:39.389986+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:40.390198+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:41.390353+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:42.390533+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:43.390680+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:44.390822+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7839744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:45.390975+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7839744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:46.391205+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7839744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:47.391407+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7839744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:48.391582+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7839744 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:49.391747+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:50.391926+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:51.392083+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:52.392372+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:53.392498+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:54.392633+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:55.392801+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:56.392977+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:57.393256+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:58.393445+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:59.393581+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7831552 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:00.393694+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:01.393873+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:02.394037+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:03.394228+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:04.394405+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:05.394626+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:06.394758+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:07.394962+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:08.395201+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88637440 unmapped: 7823360 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:09.395358+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:10.395517+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:11.395673+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:12.395877+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:13.396018+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:14.396199+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:15.396316+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:16.396472+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:17.396634+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:18.396810+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7815168 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:19.397006+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:20.397226+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:21.397450+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:22.397609+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:23.397789+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:24.397922+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:25.398031+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:26.398230+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:27.398470+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:28.398662+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:29.398793+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:30.398984+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:31.399172+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:32.399351+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.773590088s of 300.564941406s, submitted: 22
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:33.399488+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:34.399628+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88653824 unmapped: 7806976 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:35.399768+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88686592 unmapped: 7774208 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:36.399948+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88702976 unmapped: 7757824 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:37.400161+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88735744 unmapped: 7725056 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:38.400326+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:39.400516+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:40.400774+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:41.400919+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:42.401103+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:43.401370+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:44.401458+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:45.401575+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:46.401735+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:47.401922+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:48.402085+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:49.402333+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:50.402493+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:51.402772+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:52.402934+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:53.403142+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:54.403368+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:55.403522+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:56.403683+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:57.403885+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:58.404345+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:59.404532+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:00.404710+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:01.404940+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:02.405264+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:03.405485+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:04.405691+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:05.405824+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:06.406103+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:07.406683+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:08.407006+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:09.407192+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:10.407425+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:11.407585+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:12.407797+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:13.407989+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:14.408171+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:15.408522+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:16.408741+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:17.409159+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:18.409324+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:19.409445+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:20.410489+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:21.414979+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:22.417603+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:23.420379+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:24.420726+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:25.422186+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread fragmentation_score=0.000145 took=0.000037s
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:26.422575+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:27.424208+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:28.424505+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:29.424650+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:30.424795+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:31.424965+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:32.425237+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:33.425461+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:34.425601+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:35.425802+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:36.426028+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:37.426302+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:38.426555+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:39.426971+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:40.427182+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:41.427501+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:42.427674+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:43.428010+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:44.428177+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:45.428345+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:46.428495+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:47.428630+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:48.428805+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:49.428987+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:50.429194+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:51.429350+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:52.429495+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:53.429718+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:54.429930+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:55.430070+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:56.430279+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:57.430450+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:58.430662+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:59.430857+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:00.431014+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:01.431189+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:02.431353+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:03.431506+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:04.431656+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:05.431857+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:06.432063+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:07.432250+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:08.432463+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:09.432693+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:10.432859+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:11.433057+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:12.433255+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 7708672 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:13.433368+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:14.433494+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:15.433652+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:16.433847+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:17.434013+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:18.434176+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:19.434339+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:20.434474+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:21.434635+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:22.434799+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:23.434990+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:24.435178+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:25.435316+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:26.435461+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:27.435620+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:28.435772+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:29.435934+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:30.436089+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:31.436238+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:32.436420+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:33.436651+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:34.436913+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:35.437086+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:36.437261+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:37.437390+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:38.437573+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 7700480 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:39.437800+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:40.437975+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:41.438220+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:42.438347+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:43.438506+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:44.438652+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:45.438823+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:46.439008+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:47.439160+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:48.439335+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:49.439468+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:50.439729+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:51.439931+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:52.440065+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:53.440225+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:54.440408+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:55.440615+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:56.440906+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:57.441085+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:58.441290+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:59.441452+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:00.441601+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:01.441868+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:02.442025+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:03.442214+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88768512 unmapped: 7692288 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:04.442349+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:05.442518+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:06.442660+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:07.442830+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:08.443155+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:09.443299+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:10.443432+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:11.443586+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:12.443715+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:13.443892+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:14.444026+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:15.444184+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:16.444304+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:17.444505+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:18.444716+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:19.444873+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:20.445057+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88776704 unmapped: 7684096 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:21.445226+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:22.445375+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:23.445550+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:24.445742+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:25.445990+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:26.446248+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:27.446451+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:28.446653+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:29.446886+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:30.447188+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:31.447418+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:32.447685+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:33.448013+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:34.448192+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:35.448343+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:36.448450+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:37.448579+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:38.449032+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:39.449189+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:40.449367+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:41.449546+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:42.449723+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:43.449907+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88784896 unmapped: 7675904 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:44.450173+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:45.450408+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:46.450648+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:47.450804+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:48.451094+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:49.451292+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:50.451460+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:51.451622+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:52.451761+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:53.451935+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:54.452210+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:55.452371+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:56.452702+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:57.452904+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:58.453081+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:59.453620+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:00.453893+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:01.454206+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:02.454502+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:03.456478+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:04.458183+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:05.458596+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:06.459439+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:07.459572+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88793088 unmapped: 7667712 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:08.460314+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:09.460835+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:10.461483+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:11.462221+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:12.462401+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:13.462517+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:14.462646+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:15.462826+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:16.463170+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:17.463465+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:18.463853+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:19.464192+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:20.464380+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:21.464548+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:22.464690+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:23.464837+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:24.464977+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:25.465121+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:26.465256+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:27.465401+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:28.466459+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:29.466760+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:30.467091+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:31.467541+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:32.468179+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 7659520 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:33.468436+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:34.468696+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:35.468848+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:36.469336+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:37.469635+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:38.469916+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:39.470378+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:40.470532+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:41.470698+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:42.470911+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:43.471053+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:44.471280+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:45.471548+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:46.471750+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:47.471947+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:48.472186+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88809472 unmapped: 7651328 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:49.472344+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:50.472487+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:51.472637+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:52.472771+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:53.473023+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:54.473258+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:55.473573+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:56.473770+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:57.473920+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:58.474198+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88817664 unmapped: 7643136 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:59.474359+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:00.474605+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:01.475070+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:02.475456+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:03.475627+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:04.475838+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:05.476026+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:06.476280+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:07.476451+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:08.476645+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:09.476778+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:10.476921+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:11.477052+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:12.477206+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:13.477357+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:14.477480+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:15.477811+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:16.478024+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:17.478148+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:18.478372+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:19.478537+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:20.478733+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:21.478956+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:22.479157+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:23.479290+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:24.479501+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:25.479683+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:26.479859+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:27.480090+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:28.480370+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:29.480500+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:30.480633+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:31.480784+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:32.480954+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:33.481143+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:34.481306+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:35.481472+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:36.481674+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:37.481883+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:38.482082+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:39.482244+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:40.482384+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:41.482664+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:42.482857+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:43.483047+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:44.483212+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:45.483346+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:46.483478+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:47.483614+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:48.483867+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:49.483997+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:50.484152+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:51.484408+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:52.485257+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:53.485542+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:54.485703+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:55.485975+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:56.486236+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:57.486465+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:58.486858+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:59.487099+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:00.487346+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:01.487602+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:02.487787+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:03.487963+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:04.488130+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:05.488326+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:06.488507+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:07.488681+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:08.489176+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:09.489319+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:10.489487+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:11.489692+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:12.489891+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:13.490063+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:14.490230+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:15.490458+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88825856 unmapped: 7634944 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:16.490624+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:17.490831+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:18.491053+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:19.491296+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:20.491532+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:21.491723+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:22.491980+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:23.492230+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:24.492443+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:25.492716+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:26.492889+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:27.493148+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:28.493947+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:29.494250+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:30.494553+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:31.494824+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:32.495027+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:33.495300+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:34.495508+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7626752 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:35.495683+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:36.496019+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:37.496215+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:38.496464+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:39.496636+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:40.496855+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:41.497024+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:42.497298+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:43.497529+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:44.497759+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:45.497948+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:46.498290+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:47.498581+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:48.498849+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:49.499090+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:50.499351+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:51.499554+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:52.499775+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:53.499918+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88842240 unmapped: 7618560 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:54.500220+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:55.500452+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:56.500717+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:57.500924+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:58.501169+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:59.501345+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:00.501577+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:01.501768+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:02.502031+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:03.502242+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:04.502442+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:05.502607+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:06.502817+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:07.503077+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:08.503345+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:09.503497+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:10.503685+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:11.503895+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:12.504183+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 7610368 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:13.504382+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:14.504530+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:15.504739+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:16.504938+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:17.505156+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:18.505351+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:19.505475+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:20.505670+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:21.505907+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:22.506231+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:23.506421+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:24.506585+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:25.506792+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:26.506960+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:27.507194+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:28.507425+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:29.507658+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:30.507905+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:31.508062+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:32.508240+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:33.508420+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 7602176 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:34.508548+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 7593984 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:35.508697+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 7593984 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:36.508876+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 7593984 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:37.509251+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:38.509517+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:39.509672+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:40.509816+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:41.509946+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:42.510131+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:43.510291+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:44.510415+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:45.510559+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:46.510700+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:47.510833+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:48.511055+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:49.511178+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:50.511361+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:51.511494+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 7585792 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:52.511630+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:53.511834+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:54.511974+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:55.512101+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:56.512282+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:57.512447+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:58.512600+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:59.512741+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:00.512912+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:01.513055+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:02.513195+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:03.513577+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:04.513705+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:05.513864+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:06.513988+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:07.514173+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:08.514404+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:09.514573+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:10.514722+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:11.514929+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 7577600 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:12.515081+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:13.515253+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:14.515427+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:15.515549+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:16.515673+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:17.515805+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:18.515945+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:19.516078+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:20.516216+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:21.516394+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:22.516552+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:23.516763+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:24.516885+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:25.517103+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:26.517294+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:27.517472+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:28.517656+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:29.517810+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:30.517957+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:31.518160+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:32.518323+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:33.518574+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 7569408 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:34.518731+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:35.518932+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:36.519047+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:37.519191+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:38.519353+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:39.519513+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:40.519649+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:41.519808+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:42.519925+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:43.520099+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:44.520274+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:45.520363+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:46.520493+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:47.520641+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:48.520857+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:49.521047+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:50.521243+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:51.521382+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:52.521463+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:53.521596+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:54.521704+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:55.521866+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:56.522003+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88899584 unmapped: 7561216 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:57.522104+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:58.522289+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:59.522419+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:00.522536+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:01.522690+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:02.522909+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:03.523189+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:04.523350+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:05.523499+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:06.523636+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:07.523758+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:08.523896+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:09.524040+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:10.524163+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:11.524479+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:12.524626+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:13.524753+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 8716 writes, 35K keys, 8716 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 8716 writes, 1835 syncs, 4.75 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:14.524871+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:15.525007+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:16.525200+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:17.525349+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:18.525510+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:19.525631+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:20.525822+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:21.525974+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 7553024 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:22.526219+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 ms_handle_reset con 0x562642e07800 session 0x562643632700
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: handle_auth_request added challenge on 0x562643ab5400
Jan 31 06:32:09 compute-0 ceph-osd[87070]: mgrc ms_handle_reset ms_handle_reset con 0x562646b74800
Jan 31 06:32:09 compute-0 ceph-osd[87070]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/945587794
Jan 31 06:32:09 compute-0 ceph-osd[87070]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/945587794,v1:192.168.122.100:6801/945587794]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: get_auth_request con 0x562646c80c00 auth_method 0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: mgrc handle_mgr_configure stats_period=5
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 7315456 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:23.526392+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 ms_handle_reset con 0x562646912000 session 0x562646f75340
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: handle_auth_request added challenge on 0x562642e07800
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 ms_handle_reset con 0x56264406d800 session 0x56264652ca80
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: handle_auth_request added challenge on 0x56264410b800
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:24.526526+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:25.526665+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:26.526848+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:27.526984+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:28.527193+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:29.527322+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:30.527525+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:31.527700+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:32.527827+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:33.527992+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:34.528164+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:35.528325+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:36.528469+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:37.528624+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:38.528821+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:39.528961+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:40.529063+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:41.529164+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:42.529339+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:43.529453+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:44.529577+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:45.529710+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:46.529869+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:47.529998+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:48.530091+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:49.530262+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:50.530394+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:51.530541+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:52.530678+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:53.530762+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:54.530956+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:55.531044+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:56.531237+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:57.531381+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:58.531539+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:59.531660+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:00.531802+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:01.531936+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:02.532099+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:03.532302+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:04.532455+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:05.532581+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:06.532772+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:07.532918+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:08.533039+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:09.533161+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:10.533394+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:11.533532+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:12.533663+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:13.533792+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:14.533991+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:15.534201+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:16.534341+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:17.534571+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:18.534753+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:19.534873+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:20.534982+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:21.535147+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:22.535323+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:23.535456+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89276416 unmapped: 7184384 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 ms_handle_reset con 0x5626436a0c00 session 0x562643633180
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: handle_auth_request added challenge on 0x56264406d800
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:24.535614+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:25.535761+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:26.535888+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:27.536080+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:28.536316+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:29.536446+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:30.536608+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:31.536763+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 597.591003418s of 599.401733398s, submitted: 90
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:32.536881+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 7446528 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:33.537027+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89030656 unmapped: 7430144 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:34.537168+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:35.537302+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:36.537455+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:37.537606+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:38.537805+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:39.537964+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:40.538098+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:41.538337+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:42.538588+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:43.538767+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:44.538936+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:45.539050+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:46.539217+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:47.539387+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:48.539568+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:49.539715+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:50.539863+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:51.539970+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:52.540251+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:53.540472+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:54.540608+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:55.540823+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:56.541030+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:57.541184+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:58.541324+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:59.541485+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:00.541647+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:01.541798+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:02.541999+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:03.542157+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:04.542303+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:05.542497+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:06.542659+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:07.542801+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:08.542997+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:09.543158+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:10.543293+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:11.543497+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:12.543637+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:13.543791+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:14.543987+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:15.544141+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:16.544267+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:17.544442+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:18.544622+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:19.544739+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:20.544870+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:21.545009+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:22.545167+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:23.545330+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:24.545463+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:25.545595+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:26.545747+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:27.545874+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:28.546230+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:29.546345+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:30.546790+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:31.546958+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:32.547089+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:33.547239+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:34.547373+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:35.547540+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:36.547718+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:37.547855+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:38.548091+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:39.548327+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:40.548539+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:41.548716+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:42.548824+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:43.548959+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:44.549099+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:45.549233+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:46.549409+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:47.549538+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:48.549701+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:49.549825+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:50.549965+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:51.550089+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:52.550258+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:53.550393+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:54.550523+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:55.550645+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:56.550774+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:57.550932+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:58.551107+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89047040 unmapped: 7413760 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:59.551299+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fce21000/0x0/0x4ffc00000, data 0x139e1f/0x20b000, compress 0x0/0x0/0x0, omap 0x14b5d, meta 0x2bbb4a3), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88915968 unmapped: 7544832 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:00.551459+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88915968 unmapped: 7544832 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:01.551571+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046980 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88915968 unmapped: 7544832 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:02.551720+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 88915968 unmapped: 7544832 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: handle_auth_request added challenge on 0x562644508400
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:03.551922+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 7372800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 133 handle_osd_map epochs [133,134], i have 134, src has [1,134]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 91.503730774s of 91.813667297s, submitted: 112
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:04.552145+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 7372800 heap: 96460800 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fce1b000/0x0/0x4ffc00000, data 0x13b9c3/0x20f000, compress 0x0/0x0/0x0, omap 0x14e17, meta 0x2bbb1e9), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _renew_subs
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:05.552286+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89333760 unmapped: 23912448 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _renew_subs
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 136 ms_handle_reset con 0x562644508400 session 0x56264403e540
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: get_auth_request con 0x56264406c000 auth_method 0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:06.552598+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103369 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 23904256 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:07.552727+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 23904256 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: handle_auth_request added challenge on 0x56264406cc00
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:08.552896+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89473024 unmapped: 23773184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fc611000/0x0/0x4ffc00000, data 0x93f1a9/0xa17000, compress 0x0/0x0/0x0, omap 0x152fb, meta 0x2bbad05), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:09.553048+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 19152896 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:10.553216+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _renew_subs
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 137 ms_handle_reset con 0x56264406cc00 session 0x5626462241c0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fbe15000/0x0/0x4ffc00000, data 0x113f1a9/0x1217000, compress 0x0/0x0/0x0, omap 0x152fb, meta 0x2bbad05), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 23797760 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _renew_subs
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:11.553367+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172075 data_alloc: 218103808 data_used: 8040
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 23797760 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:12.553507+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 89448448 unmapped: 23797760 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:13.553721+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:14.553847+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:15.554045+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:16.554274+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:17.554449+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:18.554672+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:19.554818+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:20.555022+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _renew_subs
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:21.555198+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:22.555339+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:23.557878+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:24.558014+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:25.558190+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:26.558359+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:27.558522+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:28.558710+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:29.558840+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:30.558975+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:31.559239+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:32.559462+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:33.559597+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:34.559733+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:35.559922+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:36.560078+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:37.560210+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:38.560387+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:39.560563+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:40.560743+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:41.560860+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:42.561009+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:43.561203+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:44.561364+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:45.561478+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:46.561623+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:47.561771+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:48.561957+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:49.562093+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:50.562241+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:51.562409+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:52.562568+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:53.562756+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:54.562888+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:55.563024+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:56.563182+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:57.563319+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:58.563478+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:59.563623+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:00.563756+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:01.563970+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:02.564088+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:03.564260+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:04.564374+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:05.564499+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:06.564665+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:07.564888+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:08.565066+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:09.565230+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:10.565363+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:11.565512+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:12.565643+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:13.565803+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:14.565929+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:15.566088+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:16.566251+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:17.566393+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:18.566601+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:19.566748+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:20.566939+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:21.567074+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:22.567219+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:23.567386+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:24.567529+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:25.567669+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:26.567811+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:27.567979+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:28.568184+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:29.568363+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:30.568492+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:31.568625+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:32.568797+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:33.568939+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:34.569067+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:35.569190+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:36.569320+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:37.569434+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:38.569592+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:39.569727+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:40.569898+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:41.570051+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:42.570176+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:43.570298+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:44.570417+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:45.570558+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:46.570730+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:47.570888+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:48.571086+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:49.571218+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:50.571357+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:51.571485+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:52.571618+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:53.582792+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:54.582944+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:55.583150+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:56.583284+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:57.583445+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:58.583602+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:59.583734+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:00.583872+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:01.584020+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:02.584206+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:03.584383+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:04.584497+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:05.584619+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:06.584744+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:07.584909+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:08.585083+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:09.585256+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:10.585433+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:11.585635+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:12.585780+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:13.585964+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:14.586107+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:15.586346+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:16.586490+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:17.586655+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:18.586804+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:19.586964+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:20.587136+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:21.587278+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:22.587529+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:23.587751+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:24.587997+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:25.588212+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:26.605801+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90497024 unmapped: 22749184 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:27.606018+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:28.606159+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:29.606329+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:30.606461+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:31.607259+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:32.607479+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:33.607627+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:34.607847+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:35.608018+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:36.608211+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:37.608357+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:38.608607+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:39.608826+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:40.609015+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:41.609233+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:42.609426+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:43.609558+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:44.609708+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:45.609835+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 22740992 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:46.609951+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:47.610055+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:48.610257+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:49.610419+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:50.610535+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:51.610831+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:52.610954+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:53.611105+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:54.611331+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:55.611492+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:56.611638+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:57.611770+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:58.611959+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:59.612165+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:00.612303+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:01.612478+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:02.612683+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:03.612810+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:04.612944+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:05.613094+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:06.613288+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:07.613459+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:08.613620+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:09.613768+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:10.614039+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 22732800 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:11.614211+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:12.614368+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:13.614516+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:14.614668+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:15.614822+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:16.614952+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:17.615091+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:18.615277+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:19.615352+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:20.615476+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:21.615602+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:22.615732+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:23.615865+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:24.616004+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:25.616102+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:26.616297+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:27.616419+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:28.616582+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:29.616695+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:30.616888+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:31.617174+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 22724608 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:32.617318+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:33.617415+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:34.617583+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:35.617728+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:36.617883+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:37.617998+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:38.618150+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:39.618270+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:40.618436+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:41.618581+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:42.618853+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:43.619079+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:44.619345+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:45.619652+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:46.619834+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:47.620098+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:48.620363+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:49.620492+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:50.620675+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:51.620830+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:52.620956+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90529792 unmapped: 22716416 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:53.621157+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:54.621380+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:55.621614+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:56.621735+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:57.621918+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:58.622066+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:59.622225+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:00.622387+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:01.622571+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:02.622758+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:03.622919+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:04.623064+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:05.623226+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:06.623428+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:07.623555+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:08.623749+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:09.623898+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:10.625501+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:11.625673+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90537984 unmapped: 22708224 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:12.625837+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:13.626031+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:14.626245+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:15.626420+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:16.626549+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:17.626723+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:18.626908+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:19.626991+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:20.627092+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:21.627246+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:22.627366+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:23.627471+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:24.627591+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:25.627720+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:26.627832+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:27.627947+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:28.628203+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:29.676411+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:30.676572+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:31.676737+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:32.676851+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:33.676967+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:34.677151+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:35.677279+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90546176 unmapped: 22700032 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:36.677412+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 90660864 unmapped: 22585344 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'config diff' '{prefix=config diff}'
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'config show' '{prefix=config show}'
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:37.677516+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:09 compute-0 ceph-osd[87070]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:09 compute-0 ceph-osd[87070]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174833 data_alloc: 218103808 data_used: 8625
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 91185152 unmapped: 22061056 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: tick
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_tickets
Jan 31 06:32:09 compute-0 ceph-osd[87070]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:38.677645+0000)
Jan 31 06:32:09 compute-0 ceph-osd[87070]: prioritycache tune_memory target: 4294967296 mapped: 91226112 unmapped: 22020096 heap: 113246208 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:09 compute-0 ceph-osd[87070]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fb99c000/0x0/0x4ffc00000, data 0x15b2803/0x168e000, compress 0x0/0x0/0x0, omap 0x15840, meta 0x2bba7c0), peers [0,2] op hist [])
Jan 31 06:32:09 compute-0 ceph-osd[87070]: do_command 'log dump' '{prefix=log dump}'
Jan 31 06:32:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} v 0)
Jan 31 06:32:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 06:32:09 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 06:32:09 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2382554212' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 06:32:09 compute-0 rsyslogd[1004]: imjournal from <np0005603492:ceph-osd>: begin to drop messages due to rate-limiting
Jan 31 06:32:10 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:10 compute-0 ceph-mon[75251]: pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:10 compute-0 ceph-mon[75251]: from='client.14548 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:10 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/653434397' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 06:32:10 compute-0 ceph-mon[75251]: from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:10 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4150817308' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 06:32:10 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} v 0)
Jan 31 06:32:10 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 06:32:10 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 06:32:10 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/215355256' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 06:32:10 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 06:32:11 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826176174' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 06:32:11 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: from='client.14554 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2382554212' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:11 compute-0 ceph-mon[75251]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/215355256' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2826176174' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 06:32:11 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 06:32:11 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297866269' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 06:32:11 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14570 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:12 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 31 06:32:12 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222542219' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 06:32:12 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:12 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14574 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:12 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14578 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:12 compute-0 ceph-mon[75251]: from='client.14564 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:12 compute-0 ceph-mon[75251]: from='client.14566 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:12 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3297866269' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 06:32:12 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3222542219' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 06:32:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 31 06:32:13 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041943892' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 31 06:32:13 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14582 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:13 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 31 06:32:13 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089953822' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1335296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:48.558324+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 4 last_log 155 sent 151 num 4 unsent 4 sending 4
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:17.558618+0000 osd.0 (osd.0) 152 : cluster [DBG] 8.14 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:17.569357+0000 osd.0 (osd.0) 153 : cluster [DBG] 8.14 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:18.520945+0000 osd.0 (osd.0) 154 : cluster [DBG] 10.16 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:18.531541+0000 osd.0 (osd.0) 155 : cluster [DBG] 10.16 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 155)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:17.558618+0000 osd.0 (osd.0) 152 : cluster [DBG] 8.14 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:17.569357+0000 osd.0 (osd.0) 153 : cluster [DBG] 8.14 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:18.520945+0000 osd.0 (osd.0) 154 : cluster [DBG] 10.16 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:18.531541+0000 osd.0 (osd.0) 155 : cluster [DBG] 10.16 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1335296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:49.558533+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980337 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1327104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:50.558683+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1327104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:51.558839+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1327104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:52.558970+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:53.559153+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.656091690s of 10.928477287s, submitted: 8
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:54.559264+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:24.479565+0000 osd.0 (osd.0) 156 : cluster [DBG] 10.1 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:24.490133+0000 osd.0 (osd.0) 157 : cluster [DBG] 10.1 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 157)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:24.479565+0000 osd.0 (osd.0) 156 : cluster [DBG] 10.1 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:24.490133+0000 osd.0 (osd.0) 157 : cluster [DBG] 10.1 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982750 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:55.559414+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:25.517933+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.1d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:25.528524+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.1d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 159)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:25.517933+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.1d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:25.528524+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.1d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 1318912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:56.559587+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:26.517843+0000 osd.0 (osd.0) 160 : cluster [DBG] 8.1f scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:26.528408+0000 osd.0 (osd.0) 161 : cluster [DBG] 8.1f scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 161)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:26.517843+0000 osd.0 (osd.0) 160 : cluster [DBG] 8.1f scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:26.528408+0000 osd.0 (osd.0) 161 : cluster [DBG] 8.1f scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 1294336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:57.559791+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 1 last_log 162 sent 161 num 1 unsent 1 sending 1
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:27.557400+0000 osd.0 (osd.0) 162 : cluster [DBG] 11.4 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 162)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:27.557400+0000 osd.0 (osd.0) 162 : cluster [DBG] 11.4 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 1277952 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:58.560189+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 1 last_log 163 sent 162 num 1 unsent 1 sending 1
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:27.567893+0000 osd.0 (osd.0) 163 : cluster [DBG] 11.4 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 163)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:27.567893+0000 osd.0 (osd.0) 163 : cluster [DBG] 11.4 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 1269760 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:58:59.560332+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992402 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 1269760 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:00.560479+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:29.601105+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.9 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:29.615437+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.9 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 165)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:29.601105+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.9 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:29.615437+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.9 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 1261568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:01.560784+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 1261568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:02.560919+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 1253376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:03.561059+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:33.540331+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.15 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:33.554376+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.15 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 167)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:33.540331+0000 osd.0 (osd.0) 166 : cluster [DBG] 10.15 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:33.554376+0000 osd.0 (osd.0) 167 : cluster [DBG] 10.15 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 1245184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:04.561263+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994817 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1236992 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:05.561462+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1236992 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:06.561640+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.096636772s of 12.127861977s, submitted: 12
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 1228800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:07.561794+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:36.607409+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.6 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T05:59:36.621525+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.6 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 169)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:36.607409+0000 osd.0 (osd.0) 168 : cluster [DBG] 8.6 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T05:59:36.621525+0000 osd.0 (osd.0) 169 : cluster [DBG] 8.6 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:08.562045+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 1220608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:09.562213+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997228 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:10.562372+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:11.562527+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 1212416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:12.562648+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:13.562869+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1204224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:14.563063+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997228 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:15.563246+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:16.563408+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 1187840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:17.563535+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:18.563693+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:19.563894+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997228 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:20.564053+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:21.564227+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1179648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:22.564379+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:23.564581+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1171456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:24.564795+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997228 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:25.564945+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:26.565164+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1163264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:27.565436+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:28.565567+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1155072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:29.565725+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997228 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:30.565863+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.180316925s of 24.190004349s, submitted: 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:31.565984+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:00.797417+0000 osd.0 (osd.0) 170 : cluster [DBG] 10.d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:00.811554+0000 osd.0 (osd.0) 171 : cluster [DBG] 10.d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1146880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:32.566218+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 1138688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 171)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:00.797417+0000 osd.0 (osd.0) 170 : cluster [DBG] 10.d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:00.811554+0000 osd.0 (osd.0) 171 : cluster [DBG] 10.d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:33.566501+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 1138688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:34.566657+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999641 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:35.566819+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:36.566971+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1114112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:37.567099+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:06.839878+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.f scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:06.857261+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.f scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 173)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:06.839878+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.f scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:06.857261+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.f scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:38.567326+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 1105920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:39.567479+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1097728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:40.567659+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:09.781016+0000 osd.0 (osd.0) 174 : cluster [DBG] 10.e scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:09.795264+0000 osd.0 (osd.0) 175 : cluster [DBG] 10.e scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004465 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1097728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 175)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:09.781016+0000 osd.0 (osd.0) 174 : cluster [DBG] 10.e scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:09.795264+0000 osd.0 (osd.0) 175 : cluster [DBG] 10.e scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:41.567856+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:42.567996+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:43.568174+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1089536 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.567787170s of 12.898587227s, submitted: 6
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:44.568354+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:13.695699+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.1c scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:13.738080+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.1c scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:45.568547+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006878 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1073152 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 177)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:13.695699+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.1c scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:13.738080+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.1c scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:46.568680+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:15.752485+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.1b scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:15.777202+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.1b scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 179)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:15.752485+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.1b scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:15.777202+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.1b scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:47.568835+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1064960 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:48.569005+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1056768 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:49.569196+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:18.745968+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.3 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:18.788314+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.3 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1048576 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 181)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:18.745968+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.3 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:18.788314+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.3 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:50.569427+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011702 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1040384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:51.569651+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1040384 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:52.569808+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:53.569972+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:22.760240+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.1d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:22.792008+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.1d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:54.570148+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 183)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:22.760240+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.1d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:22.792008+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.1d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 1032192 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:55.570336+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014115 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 1024000 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:56.570500+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 1024000 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:57.570653+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 1024000 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:58.570902+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T05:59:59.571030+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 1015808 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:00.571159+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014115 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 1007616 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.460588455s of 17.106412888s, submitted: 8
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:01.571292+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:30.802511+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.1 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:30.848383+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.1 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 999424 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 185)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:30.802511+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.1 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:30.848383+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.1 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:02.571505+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:31.788841+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:31.827705+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 974848 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 187)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:31.788841+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.d scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:31.827705+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.d scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:03.571753+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:04.571865+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:33.822222+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.9 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:33.857492+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.9 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 966656 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 189)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:33.822222+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.9 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:33.857492+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.9 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:05.572055+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:34.792193+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.16 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:34.823988+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.16 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023761 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 191)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:34.792193+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.16 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:34.823988+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.16 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:06.572271+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:35.765614+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.b scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:35.793728+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.b scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 193)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:35.765614+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.b scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:35.793728+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.b scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:07.572466+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 950272 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:08.572600+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:37.759941+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.5 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:37.802250+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.5 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 195)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:37.759941+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.5 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:37.802250+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.5 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:09.572776+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 942080 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:10.572897+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:39.709126+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.11 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:39.747967+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.11 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030996 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 197)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:39.709126+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.11 scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:39.747967+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.11 scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:11.573187+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 933888 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:12.573392+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:13.573566+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.707901001s of 12.900356293s, submitted: 14
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 925696 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:14.573687+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:43.702856+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1e scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  will send 2026-01-31T06:00:43.738054+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1e scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:15.573822+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client handle_log_ack log(last 199)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:43.702856+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.1e scrub starts
Jan 31 06:32:13 compute-0 ceph-osd[86016]: log_client  logged 2026-01-31T06:00:43.738054+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.1e scrub ok
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:16.573926+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 909312 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:17.574037+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:18.574210+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:19.574471+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:20.574584+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:21.574721+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 901120 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:22.574889+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:23.575405+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 892928 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:24.575560+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 884736 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:25.575731+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 884736 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:26.575851+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:27.575986+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:28.576159+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 876544 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:29.576311+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 868352 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:30.576492+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 868352 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:31.576649+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:32.576791+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:33.576995+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 860160 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:34.577146+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:35.577279+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:36.577450+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:37.577595+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:38.578900+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 827392 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:39.579017+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 811008 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:40.579141+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 811008 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:41.579270+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 802816 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:42.579428+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 802816 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:43.579660+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 794624 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:44.579875+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 794624 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:45.580048+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 786432 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:46.580195+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 786432 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:47.580345+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 786432 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:48.580520+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:49.580692+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:50.580823+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:51.580959+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 770048 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:52.581101+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 770048 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:53.581395+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:54.581532+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:55.581721+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 753664 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:56.581846+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 753664 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:57.581959+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 737280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:58.582166+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 737280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:00:59.582321+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 729088 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:00.582467+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 720896 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:01.582601+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 720896 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:02.582729+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 712704 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:03.582882+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 712704 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:04.583016+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 704512 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:05.583177+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 696320 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:06.583355+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 696320 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:07.583488+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:08.583627+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:09.583818+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:10.583968+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 679936 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:11.584158+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 679936 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:12.584276+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 671744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:13.584417+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 671744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:14.584552+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 663552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:15.584682+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 655360 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:16.584789+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 655360 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:17.584957+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:18.585090+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:19.585270+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:20.585406+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 638976 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:21.585530+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 638976 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:22.585650+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 630784 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:23.585862+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 630784 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:24.586023+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:25.586189+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:26.586341+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:27.586475+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 614400 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:28.586665+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 614400 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:29.586799+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 606208 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:30.586934+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 606208 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:31.587059+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 606208 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:32.587193+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 589824 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:33.587387+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 589824 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:34.587497+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 581632 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:35.587680+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 573440 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:36.587802+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 573440 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:37.587918+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:38.588057+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:39.588210+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:40.588351+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:41.588506+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:42.588644+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:43.588847+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:44.589032+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:45.589448+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:46.589627+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:47.589771+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:48.589916+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:49.590062+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:50.590187+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:51.590307+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:52.590504+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:53.590684+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:54.590851+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:55.591010+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:56.591186+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:57.591743+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:58.591903+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:01:59.592246+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:00.592441+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:01.592813+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:02.599180+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 483328 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:03.599528+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 483328 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:04.599640+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:05.599936+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:06.600086+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:07.600261+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:08.600459+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:09.600618+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:10.600772+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:11.600953+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:12.601170+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:13.601397+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:14.601561+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 434176 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:15.601671+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 425984 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:16.601831+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 417792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:17.601955+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 417792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:18.602093+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 417792 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:19.602249+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 409600 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:20.602425+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 409600 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:21.602586+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 409600 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:22.602791+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 401408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:23.603012+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 401408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:24.603157+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 393216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:25.603294+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 393216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:26.608388+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 393216 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:27.608531+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 385024 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:28.608673+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 385024 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:29.608828+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 376832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:30.608988+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 376832 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:31.609213+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 368640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:32.609410+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 368640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:33.609617+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 368640 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:34.609785+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:35.609954+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:36.610085+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:37.610196+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:38.610339+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:39.610476+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:40.610659+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:41.610796+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 327680 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:42.610924+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 327680 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:43.611070+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 319488 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:44.611171+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 319488 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:45.611284+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 319488 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:46.611418+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 311296 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:47.611596+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 303104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:48.611854+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 303104 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:49.611991+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 294912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:50.612173+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 294912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:51.612283+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 286720 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:52.612403+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 286720 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:53.612569+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 294912 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:54.612751+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 286720 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:55.612893+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 286720 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:56.613108+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 278528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:57.613297+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 278528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:58.613429+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 278528 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:02:59.613625+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 270336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:00.613803+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 270336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:01.613945+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 270336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:02.614101+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 270336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:03.614286+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 270336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:04.614382+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 262144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:05.614675+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 262144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:06.614835+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 262144 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:07.614981+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 253952 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:08.615202+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 253952 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:09.615328+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 245760 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:10.615458+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 245760 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:11.615583+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:12.615713+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:13.615894+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 237568 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:14.616054+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:15.616208+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 229376 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:16.616353+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:17.616830+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 221184 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:18.617037+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 212992 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:19.617241+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 204800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:20.617421+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:21.617671+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 204800 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:22.617837+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:23.618029+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 196608 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:24.618195+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:25.618366+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:26.618540+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 188416 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:27.618737+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:28.618836+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:29.618985+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:30.619186+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:31.619383+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:32.619497+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:33.619701+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:34.619836+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:35.619984+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:36.620134+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 172032 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:37.620268+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 163840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:38.620386+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 163840 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:39.620525+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:40.620648+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:41.620794+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:42.620989+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:43.621207+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 147456 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:44.621322+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 139264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:45.621431+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 139264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:46.621587+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 139264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:47.621731+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:48.621883+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:49.622054+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:50.622236+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 131072 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:51.622409+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:52.622564+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:53.622789+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:54.622905+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:55.623084+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:56.623184+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:57.623566+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:58.623691+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:03:59.623840+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:00.623999+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:01.624186+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:02.624368+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:03.624585+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:04.624715+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:05.624823+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:06.624952+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 73728 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:07.625054+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 57344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:08.625164+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 57344 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5604 writes, 25K keys, 5604 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5604 writes, 861 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5604 writes, 25K keys, 5604 commit groups, 1.0 writes per commit group, ingest: 19.11 MB, 0.03 MB/s
                                           Interval WAL: 5604 writes, 861 syncs, 6.51 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:09.625282+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1040384 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:10.625469+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1040384 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:11.625626+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 1040384 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:12.625786+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:13.626234+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:14.626418+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 1032192 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:15.626617+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:16.626761+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 1024000 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:17.626880+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:18.626989+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 1015808 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:19.627172+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 1007616 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:20.627301+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 999424 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:21.627460+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 999424 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:22.627644+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:23.627832+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 991232 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:24.627997+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:25.628291+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:26.628450+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 974848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:27.628587+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 974848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:28.628911+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 974848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:29.629054+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 974848 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:30.629566+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:31.629974+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 966656 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:32.630164+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:33.630431+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 950272 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:34.630896+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 942080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:35.631315+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 942080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:36.631549+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 942080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:37.631999+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:38.632172+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:39.632289+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 933888 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:40.632431+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:41.632574+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:42.632685+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 925696 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:43.632820+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 917504 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:44.632998+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 917504 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:45.633158+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 917504 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:46.633290+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 909312 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:47.633421+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 909312 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:48.633618+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 901120 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:49.633818+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 901120 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:50.634014+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 892928 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:51.634301+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 892928 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:52.634509+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 892928 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:53.634650+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:54.634853+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:55.635057+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 884736 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:56.635392+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:57.636068+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 876544 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:58.636520+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:04:59.637178+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:00.637346+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 868352 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:01.637514+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:02.637663+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 860160 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:03.637823+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 851968 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:04.638685+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 851968 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:05.639374+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:06.639841+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 843776 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:07.641195+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 835584 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:08.641518+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 827392 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:09.642018+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 819200 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:10.642592+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:11.643192+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 811008 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:12.643638+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:13.643915+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:14.644095+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:15.644273+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:16.644456+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:17.644622+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:18.644760+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:19.644908+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:20.645192+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:21.645388+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 770048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:22.645520+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 770048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:23.645677+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 770048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:24.645799+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:25.645955+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:26.646094+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:27.646264+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 753664 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:28.646402+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 753664 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:29.646532+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 753664 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:30.646659+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:31.646863+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 318.243804932s of 318.361785889s, submitted: 2
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 729088 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:32.647029+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:33.647190+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:34.647350+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:35.647538+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 614400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:36.647665+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033481 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 1630208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:37.647795+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 1630208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:38.647970+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 1630208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:39.648188+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 1630208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:40.648322+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 1630208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:41.648544+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 1622016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:42.648669+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 1622016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:43.648846+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 1622016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:44.648980+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 1613824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:45.649160+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 1613824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:46.649289+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 1613824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:47.649416+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 1605632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:48.649528+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 1605632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:49.649670+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1597440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:50.649805+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1597440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:51.649994+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1597440 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:52.650178+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 1589248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:53.650366+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 1589248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:54.650527+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1572864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:55.650691+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1572864 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:56.650863+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 1564672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:57.651107+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 1564672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:58.651313+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 1556480 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:05:59.651457+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1548288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:00.651614+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1548288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:01.651757+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1540096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:02.651956+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1540096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:03.652152+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1540096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:04.652298+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1531904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:05.652464+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1531904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:06.652618+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 1523712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:07.652747+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 1523712 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:08.652882+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 1515520 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:09.652977+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1507328 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:10.653086+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1507328 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:11.653171+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1507328 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:12.653292+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1507328 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:13.653480+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 1499136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:14.653612+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1490944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:15.653753+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1490944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:16.653866+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1490944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:17.654024+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1482752 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:18.654172+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1482752 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:19.654248+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1482752 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:20.654394+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:21.654540+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:22.654685+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:23.654853+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:24.655015+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:25.655166+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:26.655344+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:27.655489+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:28.655628+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:29.655758+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:30.655901+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:31.656019+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:32.656174+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:33.656332+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:34.656457+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:35.656575+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:36.656680+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1474560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:37.656797+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 1466368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:38.656926+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1458176 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:39.657027+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1458176 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:40.657207+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1449984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:41.657311+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1449984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:42.657424+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1449984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:43.657565+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1449984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:44.657699+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1449984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:45.657855+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1449984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:46.657992+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1449984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:47.658102+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1449984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:48.658191+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1441792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:49.658399+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1441792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:50.658553+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1441792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:51.658718+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1441792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:52.658930+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1441792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:53.659153+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:54.659301+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:55.659516+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:56.659731+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:57.659854+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:58.659982+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:06:59.660096+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:00.660161+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:01.660279+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1433600 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:02.660433+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 1425408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:03.660605+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1417216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:04.660818+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1417216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:05.660965+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1417216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:06.661205+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1417216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:07.661335+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1417216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:08.661511+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1409024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:09.661680+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1409024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:10.661833+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1409024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:11.661990+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1409024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:12.662147+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1409024 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:13.662346+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1400832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:14.662471+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1400832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:15.662625+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1400832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:16.662798+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:17.662938+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1400832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:18.663110+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1400832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:19.663270+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1400832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:20.663488+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1400832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:21.663625+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1400832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:22.663786+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1392640 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:23.664408+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1392640 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:24.664579+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1384448 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:25.664821+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1384448 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1384448 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:26.700198+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1384448 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:27.700376+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:28.700492+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1384448 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:29.700626+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:30.700757+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:31.700880+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:32.700998+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:33.701159+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:34.701310+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:35.701431+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:36.701567+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:37.701702+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:38.701827+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1376256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:39.701959+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:40.702104+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:41.702280+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:42.702384+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:43.702542+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:44.702798+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:45.702956+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:46.703160+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:47.703520+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:48.703674+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:49.703825+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:50.703996+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:51.704144+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1368064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:52.704262+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:53.704423+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:54.704531+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:55.704638+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:56.704838+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:57.704991+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:58.705125+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:07:59.705290+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:00.705442+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:01.705670+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:02.705882+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:03.706100+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1359872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:04.706296+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:05.706422+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:06.706556+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:07.706699+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:08.706863+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:09.706991+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:10.707178+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:11.707347+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:12.707536+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:13.707710+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1351680 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:14.707832+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:15.708026+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:16.708183+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:17.708309+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:18.708440+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:19.708575+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:20.708717+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:21.708877+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:22.709191+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:23.709387+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1343488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:24.709566+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1335296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:25.709775+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1335296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:26.709903+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1335296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:27.710055+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1335296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:28.710187+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1335296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:29.710363+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1327104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:30.710529+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1327104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:31.710662+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1327104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:32.710787+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1327104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:33.710992+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1327104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:34.711157+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1327104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:35.711322+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1327104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:36.711476+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1318912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:37.711604+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1318912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:38.711755+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1318912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:39.711901+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1318912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:40.712146+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1318912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:41.712269+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1318912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:42.712377+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1318912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:43.712711+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1318912 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:44.712870+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:45.713070+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:46.713205+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:47.713392+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:48.713554+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:49.713725+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:50.713915+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:51.714067+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:52.714283+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:53.714540+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:54.714710+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:55.714919+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:56.715063+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:57.715250+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:58.715440+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:08:59.715621+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:00.715744+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:01.715887+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:02.716074+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:03.716543+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:04.716722+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:05.716897+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:06.717042+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:07.717165+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:08.717283+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1302528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:09.717417+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1294336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:10.717548+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1286144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:11.717675+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1286144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:12.717826+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 1277952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:13.718081+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 1277952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:14.718286+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1269760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:15.718455+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1269760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:16.718649+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1269760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:17.718779+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1269760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:18.718945+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1269760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:19.719149+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1269760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:20.719266+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 1269760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: mgrc ms_handle_reset ms_handle_reset con 0x55b65380e000
Jan 31 06:32:13 compute-0 ceph-osd[86016]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/945587794
Jan 31 06:32:13 compute-0 ceph-osd[86016]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/945587794,v1:192.168.122.100:6801/945587794]
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: get_auth_request con 0x55b655efb000 auth_method 0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: mgrc handle_mgr_configure stats_period=5
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:21.719427+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 827392 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:22.719587+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 827392 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:23.731303+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 ms_handle_reset con 0x55b65326a400 session 0x55b6520bd340
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: handle_auth_request added challenge on 0x55b6543b6c00
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:24.731476+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:25.731631+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:26.731798+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:27.731933+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:28.732059+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:29.732183+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:30.732343+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:31.732455+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:32.732588+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:33.732817+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:34.733018+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:35.733208+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:36.733370+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:37.733513+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:38.733662+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:39.733823+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:40.733935+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:41.734076+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:42.734192+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:43.734486+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:44.734630+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:45.734781+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:46.734915+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:47.735191+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:48.735350+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:49.735531+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:50.735690+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:51.735830+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:52.736103+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:53.736263+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:54.736399+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:55.736626+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:56.737217+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:57.737365+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:58.737743+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:09:59.737903+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:00.738082+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:01.738251+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:02.738408+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:03.738641+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:04.738767+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:05.738971+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:06.739102+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:07.739332+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:08.739610+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:09.739798+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:10.739951+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:11.740104+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:12.740298+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:13.740504+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:14.740674+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:15.740801+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:16.740972+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:17.741155+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:18.741361+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 663552 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:19.741537+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 663552 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:20.742095+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 663552 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:21.742236+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 663552 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:22.742443+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 663552 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:23.742616+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:24.742867+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:25.743004+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:26.743182+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:27.743350+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:28.743507+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:29.743677+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:30.743825+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:31.743963+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: handle_auth_request added challenge on 0x55b6543b6000
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:32.744133+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033409 data_alloc: 218103808 data_used: 4673
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 298.933898926s of 301.439575195s, submitted: 106
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:33.744298+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:34.744422+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:35.744561+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:36.744814+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:37.744942+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:38.745064+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:39.745205+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:40.745350+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:41.745517+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:42.745684+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 475136 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:43.745842+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:44.745972+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:45.746095+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:46.746184+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:47.746310+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 466944 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:48.746474+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:49.746647+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:50.746792+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:51.746928+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:52.747088+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:53.747263+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:54.747382+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:55.747512+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:56.747671+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:57.747811+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:58.747922+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 450560 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:10:59.748054+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:00.748186+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:01.748297+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:02.748425+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:03.748568+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:04.748742+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:05.748911+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:06.749017+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:07.749189+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:08.749344+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:09.749534+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:10.749680+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:11.749851+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:12.749990+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:13.750178+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76095488 unmapped: 442368 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:14.750295+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 434176 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:15.750426+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:16.750554+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:17.750691+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:18.750835+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:19.751519+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:20.751679+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:21.751818+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:22.751971+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:23.752136+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:24.752299+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:25.752454+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:26.752597+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:27.752709+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:28.752829+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 425984 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:29.752966+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:30.753069+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:31.753179+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:32.753305+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:33.753473+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:34.753612+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:35.753763+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76120064 unmapped: 417792 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:36.753894+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:37.754024+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:38.754163+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:39.754305+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:40.754659+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 401408 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:41.754851+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 393216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:42.754965+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 393216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:43.755148+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 393216 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:44.755262+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:45.755392+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:46.755536+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:47.755659+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:48.755812+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:49.755953+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:50.756144+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:51.756292+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:52.756407+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:53.756626+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:54.756801+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:55.756985+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 376832 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:56.757185+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 360448 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:57.757350+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 360448 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:58.757616+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 360448 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:11:59.757810+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 352256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:00.758096+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 352256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:01.758284+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 352256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:02.758469+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 352256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:03.758715+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 352256 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:04.759265+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:05.759437+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:06.759624+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:07.759744+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:08.759869+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:09.760029+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:10.760216+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:11.760515+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:12.760631+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:13.760881+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:14.761083+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 344064 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:15.761246+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 335872 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:16.761473+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 319488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:17.761799+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 319488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:18.761988+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 319488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:19.762192+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 319488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:20.762418+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 319488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:21.762583+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 319488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:22.762715+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 319488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:23.762908+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76218368 unmapped: 319488 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:24.763079+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:25.763239+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:26.763443+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:27.763593+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:28.763771+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 311296 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:29.763924+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:30.764068+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:31.764190+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:32.764347+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:33.764519+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:34.764732+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:35.764944+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 303104 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:36.765224+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:37.765423+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:38.765598+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:39.765709+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:40.765822+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:41.765938+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:42.766153+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:43.766309+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:44.767103+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 286720 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:45.767290+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:46.767442+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:47.767638+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:48.767783+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 278528 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:49.767928+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:50.768078+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:51.768248+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:52.768438+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:53.768717+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:54.768944+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:55.769201+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:56.769348+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:57.769525+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:58.769631+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 270336 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:12:59.769867+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:00.770046+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:01.770224+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:02.770425+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:03.770642+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:04.770844+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:05.771057+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:06.771296+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:07.772216+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:08.772375+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:09.772584+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:10.772712+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:11.772829+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:12.773071+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:13.773466+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 262144 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:14.773718+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:15.773912+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:16.774086+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:17.774256+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:18.774376+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:19.774475+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:20.774615+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:21.774843+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:22.775008+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:23.775263+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:24.775594+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:25.775802+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:26.776002+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:27.776255+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:28.776449+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:29.776663+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:30.776951+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:31.777287+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:32.777619+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:33.777825+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 253952 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:34.778255+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:35.778654+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:36.779013+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:37.779341+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:38.779636+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:39.780033+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:40.780410+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:41.780751+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:42.790997+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:43.791264+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:44.791626+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:45.792016+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:46.792210+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:47.792364+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:48.792623+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76292096 unmapped: 245760 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:49.792831+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:50.793041+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:51.793182+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:52.793380+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:53.793598+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:54.793764+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:55.793953+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:56.794174+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:57.794338+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 237568 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:58.794536+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:13:59.794693+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:00.794910+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:01.795071+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:02.795268+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:03.795516+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:04.795739+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:05.795969+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:06.796157+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:07.796365+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 229376 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:08.796609+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5852 writes, 25K keys, 5852 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5852 writes, 985 syncs, 5.94 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b652097a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b6520978d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:09.796787+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:10.796959+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:11.797202+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:12.797391+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:13.797579+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:14.797924+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:15.798232+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:16.798416+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:17.798587+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:18.798770+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:19.798936+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:20.799208+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:21.799440+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:22.799655+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:23.799865+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:24.800043+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:25.800217+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 172032 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:26.800433+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:27.800607+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:28.800791+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:29.800987+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:30.801215+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:31.801398+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:32.801577+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:33.801770+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:34.802007+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:35.802233+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:36.802402+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:37.802644+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 139264 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:38.802832+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:39.803002+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:40.803202+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:41.803368+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:42.803551+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:43.803753+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:44.803908+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:45.804101+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:46.804304+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:47.804489+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:48.804702+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:49.804926+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:50.805254+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:51.805422+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:52.805628+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:53.805883+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:54.806084+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:55.806282+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:56.806438+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:57.806630+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:58.806803+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:14:59.806926+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:00.807110+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:01.807301+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:02.807422+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:03.807596+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:04.807795+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:05.808023+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:06.808183+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:07.808323+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 131072 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:08.808473+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 114688 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:09.808618+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 114688 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:10.808731+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 114688 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:11.808852+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 114688 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:12.808974+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 114688 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:13.809187+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 114688 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:14.809344+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:15.809478+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:16.809639+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:17.809812+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:18.809974+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:19.810176+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:20.810417+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:21.810552+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:22.810706+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:23.810899+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:24.811173+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:25.811361+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:26.811526+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:27.811675+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:28.811816+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:29.811965+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:30.812215+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:31.812396+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:32.812558+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.206604004s of 299.461029053s, submitted: 18
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:33.812773+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76431360 unmapped: 106496 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:34.812990+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76464128 unmapped: 73728 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:35.813189+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 8192 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:36.813303+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 76529664 unmapped: 1056768 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:37.813413+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 1048576 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:38.813578+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:39.813783+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:40.813952+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:41.814206+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:42.814401+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:43.814900+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:44.815182+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:45.815332+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:46.815524+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:47.815703+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:48.815857+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:49.816057+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:50.816156+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:51.816313+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:52.816440+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:53.816622+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:54.816811+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:55.817049+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:56.817265+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:57.817418+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:58.817603+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:15:59.817746+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:00.817942+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:01.818250+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:02.818408+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:03.818594+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:04.818796+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:05.819013+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:06.819171+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:07.819354+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:08.819502+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:09.819661+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:10.819815+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:11.820023+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:12.820171+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:13.820619+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:14.821724+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:15.821958+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:16.822101+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:17.822657+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:18.823436+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:19.823609+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:20.823934+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:21.824104+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:22.824926+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:23.825550+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:24.826007+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread fragmentation_score=0.000120 took=0.000021s
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:25.826328+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:26.826525+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:27.827008+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:28.827539+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1056768 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:29.827784+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:30.828045+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:31.828246+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:32.828427+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:33.828734+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:34.829033+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:35.829299+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:36.829462+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:37.829602+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:38.829820+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:39.830052+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:40.830269+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:41.830443+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:42.830575+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:43.830967+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:44.831084+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:45.831241+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:46.831421+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:47.831622+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:48.831823+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:49.832028+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:50.832157+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:51.832290+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:52.832456+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:53.832656+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:54.832788+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:55.833014+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:56.833155+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:57.833324+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:58.833459+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:16:59.833641+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:00.833805+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:01.833953+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:02.834168+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:03.834359+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:04.834541+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:05.834705+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:06.834840+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:07.835005+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:08.835134+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:09.835259+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:10.835412+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:11.835580+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:12.835744+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:13.835910+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:14.836079+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:15.836247+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:16.836455+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:17.836606+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:18.836767+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:19.836934+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:20.837091+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:21.837229+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:22.837441+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:23.837693+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:24.837823+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:25.837961+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:26.838086+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:27.838201+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:28.838349+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:29.838506+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:30.838696+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:31.838852+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:32.839063+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:33.839271+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:34.839424+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:35.839639+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:36.839765+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:37.840045+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:38.840271+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:39.840483+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:40.840672+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:41.840858+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:42.841040+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:43.841226+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:44.841358+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:45.841501+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:46.841654+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:47.841775+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:48.841931+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:49.842175+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:50.842357+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:51.842489+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1048576 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:52.842614+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:53.842836+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:54.843046+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:55.843195+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:56.843358+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:57.843507+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:58.843629+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:17:59.843788+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:00.843994+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:01.844221+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:02.844427+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:03.844661+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:04.844810+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:05.844948+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:06.845207+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:07.845374+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:08.845519+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:09.845673+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:10.845838+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:11.846010+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:12.846329+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:13.846547+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 1040384 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:14.846737+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:15.846863+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:16.847067+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:17.847262+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:18.847413+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:19.847565+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:20.847768+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:21.848052+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:22.848273+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:23.848520+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:24.848712+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:25.849042+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:26.849278+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:27.849524+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:28.849764+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:29.850042+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:30.850314+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:31.850521+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:32.850819+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:33.851264+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:34.851451+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:35.851600+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:36.851759+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:37.851896+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:38.852049+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:39.852223+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:40.852382+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:41.852602+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:42.852824+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:43.853038+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:44.853211+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:45.853401+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:46.853603+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:47.853741+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:48.853902+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:49.854054+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:50.854205+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:51.854357+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:52.854543+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:53.854751+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:54.854936+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:55.855709+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:56.856179+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:57.856657+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 1032192 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:58.857738+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 1024000 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:18:59.857914+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 1024000 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:00.858213+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:01.858468+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:02.858884+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:03.859486+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:04.859696+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:05.859921+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:06.860172+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:07.860526+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:08.860797+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:09.860977+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:10.861179+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:11.861398+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:12.861559+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:13.861736+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:14.861920+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:15.862172+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:16.862411+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:17.862612+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:18.862913+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:19.863150+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:20.863311+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:21.863477+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:22.863623+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:23.863935+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:24.864100+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:25.864315+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:26.864498+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:27.864737+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:28.865321+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:29.865922+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:30.867568+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:31.868068+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:32.869247+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:33.869987+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:34.870946+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:35.871216+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:36.871366+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:37.871540+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:38.872413+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:39.873203+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:40.873395+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:41.873584+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:42.874097+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:43.874360+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:44.874656+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:45.874937+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:46.875097+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:47.875352+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:48.875520+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:49.875651+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:50.875803+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:51.875962+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:52.876078+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:53.876454+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:54.876622+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:55.876800+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:56.877062+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:57.877190+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:58.877310+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:19:59.877472+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 1015808 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:13 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:00.877621+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:01.877771+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:02.877900+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:03.878053+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:13 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:04.878249+0000)
Jan 31 06:32:13 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:13 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:05.878379+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:06.878549+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:07.878967+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:08.879255+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:09.879566+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:10.879892+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:11.880086+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:12.880224+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:13.880543+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:14.880749+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:15.880957+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:16.881144+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:17.881318+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:18.881486+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:19.881686+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:20.881855+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:21.882074+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:22.882283+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:23.882508+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:24.882695+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:25.882861+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:26.883046+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:27.883306+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:28.883537+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:29.883688+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:30.883838+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 1007616 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:31.883991+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:32.884216+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:33.884438+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:34.884620+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:35.884807+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:36.885017+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:37.885220+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:38.885447+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:39.885658+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:40.885883+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:41.886057+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:42.886336+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:43.886518+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:44.886659+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:45.886787+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:46.886945+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:47.887075+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:48.887203+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 991232 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:49.887433+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:50.887632+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:51.887749+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:52.887911+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:53.888086+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:54.888254+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:55.888423+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:56.888579+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:57.888771+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:58.889013+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:20:59.889286+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:00.889478+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:01.889618+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:02.889786+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:03.890023+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 999424 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:04.890196+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:05.890359+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:06.890497+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:07.890677+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:08.890801+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:09.890939+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:10.891078+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:11.891210+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:12.891351+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:13.891534+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:14.891689+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:15.891814+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:16.891959+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:17.892157+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:18.892309+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:19.892490+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:20.892717+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:21.892849+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:22.892976+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:23.893203+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:24.893440+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:25.893611+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:26.893887+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:27.894037+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:28.894204+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:29.894350+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:30.894498+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:31.894646+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:32.894848+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:33.895078+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:34.895256+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:35.895458+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:36.895688+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:37.895846+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:38.896021+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:39.896253+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:40.896460+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:41.896658+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:42.896794+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:43.897009+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:44.897225+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:45.897446+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:46.897666+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:47.897867+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:48.898080+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:49.898452+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:50.898616+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:51.898805+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:52.898953+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:53.899171+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:54.899368+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:55.899542+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:56.899668+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:57.899852+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:58.900012+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:21:59.900240+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:00.900386+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:01.900532+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:02.900688+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:03.900858+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:04.901023+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:05.901166+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:06.901298+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:07.901462+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:08.901592+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:09.901723+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:10.901881+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:11.902039+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:12.902174+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:13.902391+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:14.902527+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:15.902682+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:16.902806+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:17.902943+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:18.903075+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:19.903205+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:20.903341+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:21.903570+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:22.903773+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:23.903981+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:24.904197+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:25.904382+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:26.904543+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:27.905886+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:28.906079+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:29.906256+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:30.906380+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:31.906526+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:32.906692+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:33.906853+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:34.907009+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:35.907191+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:36.907422+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:37.907578+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:38.907852+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:39.908024+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:40.908200+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:41.908364+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:42.908577+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:43.908778+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:44.908949+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:45.909083+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:46.909269+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:47.909413+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:48.909601+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:49.909752+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:50.909950+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:51.910476+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:52.910724+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:53.910938+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:54.911174+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:55.911370+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:56.911510+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:57.911646+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:58.911858+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:22:59.912077+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:00.912220+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:01.912354+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:02.912519+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:03.912735+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:04.912929+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:05.913068+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:06.913204+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:07.913340+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:08.913485+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:09.913659+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:10.913805+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:11.913956+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:12.914099+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:13.914276+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:14.914393+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:15.914519+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:16.914694+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:17.914874+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:18.915071+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:19.915208+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:20.915407+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:21.915571+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:22.915719+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:23.915889+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:24.916085+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:25.916246+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:26.916451+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:27.916614+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:28.919665+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:29.919765+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:30.919916+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:31.920088+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:32.920278+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:33.920451+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:34.920598+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:35.920766+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:36.920948+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:37.921076+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:38.921234+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:39.921363+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:40.921623+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:41.921782+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:42.921963+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:43.922321+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:44.922435+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:45.922563+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:46.922903+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:47.923057+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:48.923207+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:49.923328+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:50.923471+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:51.923622+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:52.923791+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:53.923960+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:54.924096+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:55.924242+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:56.924367+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:57.924469+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:58.924615+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:23:59.924770+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:00.924885+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:01.925021+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:02.925183+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:03.925341+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:04.925486+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:05.925652+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:06.925831+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:07.925934+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:08.926059+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 6064 writes, 25K keys, 6064 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6064 writes, 1091 syncs, 5.56 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:09.926218+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:10.926378+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:11.926541+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:12.926708+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:13.926885+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:14.927238+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78569472 unmapped: 1114112 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:15.927391+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:16.927570+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:17.927725+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:18.927929+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:19.928171+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:20.928403+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: mgrc ms_handle_reset ms_handle_reset con 0x55b655efb000
Jan 31 06:32:14 compute-0 ceph-osd[86016]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/945587794
Jan 31 06:32:14 compute-0 ceph-osd[86016]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/945587794,v1:192.168.122.100:6801/945587794]
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: get_auth_request con 0x55b656657000 auth_method 0
Jan 31 06:32:14 compute-0 ceph-osd[86016]: mgrc handle_mgr_configure stats_period=5
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:21.928784+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 942080 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:22.928987+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 942080 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:23.929184+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 ms_handle_reset con 0x55b6543b6c00 session 0x55b6539de1c0
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: handle_auth_request added challenge on 0x55b6543b7c00
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:24.929399+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:25.929605+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:26.929817+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:27.930003+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:28.930221+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:29.930430+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:30.930641+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:31.930861+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:32.931056+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:33.931293+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:34.931536+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:35.931715+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:36.931882+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:37.932146+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:38.932340+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:39.932519+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:40.932672+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:41.932832+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:42.933045+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:43.933281+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:44.933470+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:45.933640+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:46.933863+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:47.934160+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:48.934410+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:49.934646+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:50.934794+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:51.934999+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:52.935242+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:53.935452+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:54.935670+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:55.935904+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:56.936091+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:57.936282+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:58.936490+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:24:59.936677+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:00.936857+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 811008 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:01.937030+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:02.937226+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:03.937417+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:04.937581+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:05.937750+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:06.938039+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:07.938214+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:08.938382+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:09.938560+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:10.938751+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:11.938932+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:12.939153+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:13.939392+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:14.939571+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 802816 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:15.939721+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:16.939884+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:17.940050+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:18.940236+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:19.940400+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:20.940514+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:21.940655+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:22.940778+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:23.940924+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 794624 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:24.941043+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 ms_handle_reset con 0x55b6543b6800 session 0x55b653c36c40
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: handle_auth_request added challenge on 0x55b65695c000
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 647168 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:25.941167+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 647168 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:26.941289+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 647168 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:27.941411+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 647168 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:28.941554+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 647168 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:29.941694+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 647168 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:30.941818+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 647168 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:31.941969+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 596.871276855s of 599.230773926s, submitted: 106
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 778240 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:32.942167+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 778240 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:33.942312+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:34.942433+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:35.942582+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:36.942700+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:37.942874+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:38.943052+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:39.943234+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:40.943371+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:41.943638+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:42.943935+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:43.944293+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:44.944432+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:45.944601+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:46.944754+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:47.944956+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:48.945095+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:49.945246+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:50.945430+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:51.945598+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:52.945781+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:53.946001+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:54.946190+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:55.946355+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:56.946505+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:57.946683+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:58.946815+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:25:59.946965+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:00.947152+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:01.947293+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 1646592 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:02.947401+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:03.947549+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:04.947729+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:05.947898+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:06.948021+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:07.948181+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:08.948389+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:09.948526+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:10.948677+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:11.948842+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:12.949020+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:13.949205+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:14.949340+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:15.949507+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:16.949655+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:17.949774+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:18.949939+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:19.950181+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:20.950309+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:21.950424+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:22.950567+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14586 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:14 compute-0 ceph-mgr[75550]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 06:32:14 compute-0 ceph-797ee2fc-ca49-5eee-87c0-542bb035a7d7-mgr-compute-0-vavqfa[75546]: 2026-01-31T06:32:14.035+0000 7fc402b84640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:23.950773+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:24.950850+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:25.950983+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:26.951080+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:27.951208+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 1638400 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:28.951352+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:29.951481+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:30.951601+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:31.951733+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:32.951843+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:33.951955+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:34.952079+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:35.952219+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:36.952397+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:37.952527+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:38.952700+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:39.952805+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:40.952997+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:41.953207+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:42.953361+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:43.953520+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:44.953646+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:45.953774+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:46.953894+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:47.954061+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:48.954212+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:49.954339+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:50.954470+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:51.954619+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:52.954798+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:53.954944+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:54.955105+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:55.955283+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:56.955409+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:57.955566+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:58.955712+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:26:59.955828+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:00.955998+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:01.956167+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034561 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:02.956310+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xc92c1/0x19a000, compress 0x0/0x0/0x0, omap 0x17796, meta 0x2bb886a), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:03.956487+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _renew_subs
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 91.944183350s of 92.321907043s, submitted: 124
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 1630208 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:04.956613+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79118336 unmapped: 1613824 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:05.956791+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fce8a000/0x0/0x4ffc00000, data 0xcca4d/0x1a0000, compress 0x0/0x0/0x0, omap 0x17c69, meta 0x2bb8397), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: handle_auth_request added challenge on 0x55b65695c800
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: handle_auth_request added challenge on 0x55b65695cc00
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 135 ms_handle_reset con 0x55b65695c800 session 0x55b655ee9880
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 135 ms_handle_reset con 0x55b65695cc00 session 0x55b655ef1dc0
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 1351680 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:06.956950+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044067 data_alloc: 218103808 data_used: 6057
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 1351680 heap: 80732160 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:07.957144+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: handle_auth_request added challenge on 0x55b65695d000
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 88006656 unmapped: 9510912 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:08.957273+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce88000/0x0/0x4ffc00000, data 0xce628/0x1a4000, compress 0x0/0x0/0x0, omap 0x17f2b, meta 0x2bb80d5), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80691200 unmapped: 16826368 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:09.957390+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 137 ms_handle_reset con 0x55b65695d000 session 0x55b655ef0700
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:10.957528+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:11.957667+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089982 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:12.957895+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:13.958080+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc683000/0x0/0x4ffc00000, data 0x8d01e0/0x9a7000, compress 0x0/0x0/0x0, omap 0x181ef, meta 0x2bb7e11), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.364054680s of 10.160212517s, submitted: 31
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:14.958236+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:15.958394+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:16.958551+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:17.958721+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:18.958886+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:19.959040+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:20.959212+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:21.959352+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:22.959543+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:23.959699+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:24.959836+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:25.959974+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:26.960089+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:27.960223+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:28.960362+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:29.960512+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:30.960623+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:31.960867+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 16785408 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:32.961215+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:33.961468+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:34.961707+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:35.961882+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:36.962042+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:37.962170+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:38.962301+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:39.962757+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:40.962880+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:41.963041+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:42.963242+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:43.963454+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:44.963632+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:45.963828+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:46.964021+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:47.964221+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:48.964459+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:49.964666+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:50.964829+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:51.965000+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:52.965146+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:53.965327+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:54.965504+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:55.965677+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:56.965873+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:57.966038+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:58.966162+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:27:59.966329+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:00.966526+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:01.966648+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:02.966809+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:03.967027+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:04.967182+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:05.967320+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:06.967469+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:07.967598+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:08.967743+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:09.967895+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:10.968059+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:11.968215+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:12.968354+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:13.968538+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:14.968727+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:15.968866+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:16.969054+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:17.969173+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:18.969316+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:19.969428+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:20.969588+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:21.969725+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:22.969948+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:23.970199+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:24.970564+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:25.970719+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:26.970916+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:27.971099+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:28.971350+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:29.971477+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:30.971647+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:31.971794+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:32.971929+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:33.972097+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:34.972299+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:35.972459+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:36.972641+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:37.972817+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:38.972923+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:39.973200+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:40.973333+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:41.973501+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:42.973651+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:43.974141+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:44.974318+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:45.974453+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:46.974599+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:47.974780+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:48.974943+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:49.975097+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:50.975331+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:51.975482+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:52.975685+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:53.975898+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:54.976079+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:55.976249+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:56.976416+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:57.976596+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:58.976728+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:28:59.976854+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:00.977007+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:01.977185+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:02.977327+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:03.977636+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:04.977788+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:05.977938+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:06.978054+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:07.978225+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:08.978439+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:09.978622+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:10.978847+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:11.979101+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:12.979321+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:13.979653+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:14.979885+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:15.980087+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:16.980295+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:17.980477+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:18.980651+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:19.980831+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:20.981143+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:21.981428+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:22.981636+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:23.981865+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:24.982029+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:25.982404+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:26.982562+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:27.982726+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:28.982972+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:29.983222+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:30.983369+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:31.983622+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:32.983836+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:33.984165+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:34.984383+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:35.984542+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:36.984725+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:37.984911+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:38.985159+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:39.985417+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:40.985583+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:41.985689+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:42.985813+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:43.986149+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:44.986266+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:45.986403+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:46.986544+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:47.986670+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:48.986827+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:49.986978+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:50.987177+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:51.987327+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:52.987448+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:53.987585+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:54.987776+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:55.987924+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:56.988215+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:57.988350+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:58.988576+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:29:59.988722+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:00.988886+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:01.989161+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:02.989294+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:03.989486+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:04.989677+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:05.989869+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:06.990057+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:07.990213+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:08.990382+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:09.990561+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:10.990808+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:11.990994+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:12.991168+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:13.991360+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 16777216 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:14.991495+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:15.991657+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:16.991885+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:17.992030+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:18.992198+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:19.992363+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:20.992455+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:21.992615+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:22.992758+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:23.992915+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:24.993067+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:25.993224+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:26.993311+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:27.993442+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:28.993571+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:29.993758+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:30.993968+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:31.994177+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:32.994280+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:33.994441+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:34.994568+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:35.994717+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:36.994848+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:37.994983+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:38.995156+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:39.995286+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:40.995409+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:41.995535+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:42.995672+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:43.996338+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:44.996984+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:45.997744+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:46.998059+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:47.998205+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:48.998451+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:49.998620+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:50.998806+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:51.998954+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:52.999073+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:53.999525+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:54.999700+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:55.999815+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:57.000019+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:58.000198+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:30:59.000413+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:00.000596+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:01.000836+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:02.000978+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:03.001104+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:04.001313+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:05.001524+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:06.001732+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:07.002059+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:08.002295+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:09.002437+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:10.002652+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:11.002814+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:12.003028+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:13.003198+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:14.003363+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:15.003563+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:16.003689+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:17.003822+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:18.003987+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:19.004151+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:20.004316+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:21.004443+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:22.004602+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:23.004755+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:24.004927+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:25.005089+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:26.005256+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:27.005417+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:28.005563+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:29.005693+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:30.005797+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:31.005927+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:32.006093+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:33.006286+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:34.006436+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:35.006559+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:36.006700+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:37.006803+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:38.006932+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:39.007057+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80748544 unmapped: 16769024 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc680000/0x0/0x4ffc00000, data 0x8d1c5f/0x9aa000, compress 0x0/0x0/0x0, omap 0x183f5, meta 0x2bb7c0b), peers [1,2] op hist [])
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:40.007200+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 16728064 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:41.007307+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'config diff' '{prefix=config diff}'
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'config show' '{prefix=config show}'
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 16211968 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:42.007419+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 16433152 heap: 97517568 old mem: 2845415832 new mem: 2845415832
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: tick
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_tickets
Jan 31 06:32:14 compute-0 ceph-osd[86016]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T06:31:43.007578+0000)
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 06:32:14 compute-0 ceph-osd[86016]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 06:32:14 compute-0 ceph-osd[86016]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092452 data_alloc: 218103808 data_used: 6642
Jan 31 06:32:14 compute-0 ceph-osd[86016]: do_command 'log dump' '{prefix=log dump}'
Jan 31 06:32:14 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:14 compute-0 ceph-mon[75251]: from='client.14570 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:14 compute-0 ceph-mon[75251]: pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:14 compute-0 ceph-mon[75251]: from='client.14574 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:14 compute-0 ceph-mon[75251]: from='client.14578 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:14 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1041943892' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 31 06:32:14 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4089953822' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 31 06:32:14 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:32:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 31 06:32:14 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835885325' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 31 06:32:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:32:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 31 06:32:14 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4137607027' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 31 06:32:14 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 31 06:32:14 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495887810' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 31 06:32:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:32:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:32:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:32:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:32:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 06:32:15 compute-0 ceph-mgr[75550]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 06:32:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 31 06:32:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779888951' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 31 06:32:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 31 06:32:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775638988' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 31 06:32:15 compute-0 ceph-mon[75251]: from='client.14582 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:15 compute-0 ceph-mon[75251]: from='client.14586 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:15 compute-0 ceph-mon[75251]: pgmap v1283: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:15 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3835885325' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 31 06:32:15 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4137607027' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 31 06:32:15 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3495887810' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 31 06:32:15 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 31 06:32:15 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3766412801' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 31 06:32:16 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 31 06:32:16 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849246378' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 31 06:32:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 31 06:32:16 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1523456396' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 31 06:32:16 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 31 06:32:16 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138261336' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 31 06:32:16 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 06:32:16 compute-0 systemd[1]: Started Hostname Service.
Jan 31 06:32:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1779888951' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 31 06:32:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2775638988' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 31 06:32:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3766412801' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 31 06:32:16 compute-0 ceph-mon[75251]: pgmap v1284: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/849246378' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 31 06:32:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1523456396' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 31 06:32:16 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4138261336' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 31 06:32:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 06:32:17 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829745247' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 06:32:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 31 06:32:17 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627672102' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 31 06:32:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 31 06:32:17 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173148738' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 31 06:32:17 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 31 06:32:17 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664692562' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 31 06:32:18 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:18 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14614 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:18 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 31 06:32:18 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137974138' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 31 06:32:18 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/2829745247' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 06:32:18 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/627672102' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 31 06:32:18 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/173148738' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 31 06:32:18 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3664692562' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 31 06:32:19 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:19 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14619 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:19 compute-0 sshd-session[257486]: Invalid user ubuntu from 45.148.10.240 port 59098
Jan 31 06:32:19 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:32:19 compute-0 sshd-session[257486]: Connection closed by invalid user ubuntu 45.148.10.240 port 59098 [preauth]
Jan 31 06:32:19 compute-0 ceph-mon[75251]: pgmap v1285: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:19 compute-0 ceph-mon[75251]: from='client.14614 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:19 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/137974138' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 31 06:32:19 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14622 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:20 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:20 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14626 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} v 0)
Jan 31 06:32:20 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 31 06:32:20 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249787880' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 31 06:32:20 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14630 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} v 0)
Jan 31 06:32:20 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: from='client.14620 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: from='client.14619 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: from='client.14622 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: pgmap v1286: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:20 compute-0 ceph-mon[75251]: from='client.14626 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 06:32:20 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/4249787880' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 31 06:32:21 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 31 06:32:21 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740390040' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 31 06:32:21 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14634 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:22 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:22 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14638 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:22 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 31 06:32:22 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3077146747' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 31 06:32:23 compute-0 ceph-mon[75251]: from='client.14630 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:23 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.hdercq", "name": "rgw_frontends"} : dispatch
Jan 31 06:32:23 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/1740390040' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 31 06:32:23 compute-0 sudo[257767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:32:23 compute-0 sudo[257767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:32:23 compute-0 sudo[257767]: pam_unix(sudo:session): session closed for user root
Jan 31 06:32:23 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14640 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:23 compute-0 sudo[257793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 06:32:23 compute-0 sudo[257793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:32:23 compute-0 sudo[257793]: pam_unix(sudo:session): session closed for user root
Jan 31 06:32:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:32:23 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:32:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 06:32:23 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:32:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 06:32:23 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 31 06:32:23 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208705244' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' 
Jan 31 06:32:24 compute-0 ceph-mgr[75550]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 06:32:24 compute-0 sudo[257940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 06:32:24 compute-0 sudo[257940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:32:24 compute-0 sudo[257940]: pam_unix(sudo:session): session closed for user root
Jan 31 06:32:24 compute-0 ceph-mon[75251]: from='client.14634 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: pgmap v1287: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 31 06:32:24 compute-0 ceph-mon[75251]: from='client.14638 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/3077146747' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: from='client.14640 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: from='mgr.14122 192.168.122.100:0/2019531543' entity='mgr.compute-0.vavqfa' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: from='client.? 192.168.122.100:0/208705244' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 31 06:32:24 compute-0 sudo[257971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/797ee2fc-ca49-5eee-87c0-542bb035a7d7/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 797ee2fc-ca49-5eee-87c0-542bb035a7d7 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 06:32:24 compute-0 sudo[257971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 06:32:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 06:32:24 compute-0 podman[258022]: 2026-01-31 06:32:24.462712733 +0000 UTC m=+0.023381360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 06:32:24 compute-0 ceph-mon[75251]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 31 06:32:24 compute-0 ceph-mon[75251]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743110875' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 31 06:32:25 compute-0 podman[258022]: 2026-01-31 06:32:25.281890558 +0000 UTC m=+0.842559185 container create eb8633531ac78c34c0291f2ba78470bc892515dd5fa02226f8fc3a6cffac500f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_jepsen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 06:32:25 compute-0 ceph-mgr[75550]: log_channel(audit) log [DBG] : from='client.14654 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
